
Demo showing how to implement a memory layer with Alexa using the Redis Agent Memory Server to create human-like, contextual, and smart conversation experiences.

redis-developer/agent-memory-server-with-alexa-demo


Agent Memory Server with Alexa Demo

Overview

[Image: my-jarvis-interaction.png]

This demo shows how Redis Agent Memory Server can extend Amazon Alexa with conversational memory. Built with Java, LangChain4J, AWS Lambda, and Redis Cloud, it enables Alexa to recall past conversations and deliver contextual, intelligent responses. It showcases how Redis can act as a memory layer for AI assistants, enriching the natural language experience through state persistence and fast retrieval.


Demo Objectives

  • Demonstrate Redis as a memory persistence layer for conversational AI.
  • Show how to integrate Redis Agent Memory Server via REST API calls.
  • Automate Alexa skill deployment using Terraform, AWS Lambda, and the ASK CLI.
  • Illustrate how Redis Cloud can support scalable AI use cases.
  • Demonstrate how to implement context engineering with LangChain4J.
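One of the objectives above is integrating the Agent Memory Server through REST calls. As a rough sketch of what such a call could look like from Java, here is a request built with only the JDK's `java.net.http` types; the endpoint path, session id, and payload shape are assumptions for illustration, so check the Agent Memory Server documentation for the actual API:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class MemoryServerRequestSketch {
    public static void main(String[] args) {
        // Hypothetical base URL and endpoint; consult your Agent Memory Server
        // deployment for the real REST paths and payload schema.
        String baseUrl = "http://localhost:8000";
        String sessionId = "alexa-session-123";
        String payload = "{\"messages\":[{\"role\":\"user\","
                + "\"content\":\"My favorite programming language is Java.\"}]}";

        // Build (but do not send) a PUT request that would store working memory
        // for the current Alexa session.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/v1/working-memory/" + sessionId))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        System.out.println(request.method() + " " + request.uri());
    }
}
```

In the real skill, a request like this would be sent with `HttpClient.send(...)` from inside the Lambda handler.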

Setup

Dependencies

Account Requirements

| Account | Description |
| --- | --- |
| AWS account | Required to create Lambda, IAM, EC2, and CloudWatch resources. |
| Amazon developer account | Needed to register and deploy Alexa skills. |
| Redis Cloud | Hosts the Redis database used by the Redis Agent Memory Server. |

Configuration

AWS Setup

  1. Install the AWS CLI: Installation Guide
  2. Configure your credentials:
    aws configure

Amazon Developer Account

  1. Install the ASK CLI: Installation Guide
  2. Configure your credentials:
    ask configure

Redis Cloud

  1. Enable API access in your Redis Cloud account and generate an API account key and user key.
  2. Export them as environment variables:
    export REDISCLOUD_ACCESS_KEY=<YOUR_API_ACCOUNT_KEY>
    export REDISCLOUD_SECRET_KEY=<YOUR_API_USER_KEY>

Terraform Configuration

  1. Create your variables file:
    cp infrastructure/terraform/terraform.tfvars.example infrastructure/terraform/terraform.tfvars
  2. Edit infrastructure/terraform/terraform.tfvars with your information:
| Variable | Description |
| --- | --- |
| payment_card_type | Credit card type linked to Redis Cloud (e.g., "Visa"). |
| payment_card_last_four | Last four digits of your card (e.g., "1234"). |
| essentials_plan_cloud_provider | Cloud provider for Redis Cloud (e.g., "AWS"). |
| essentials_plan_cloud_region | Region for hosting Redis (e.g., "us-east-1"). |
| openai_api_key | API key used by the Alexa skill and Agent Memory Server. |
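A filled-in infrastructure/terraform/terraform.tfvars corresponding to the variables above might look like this (all values are illustrative placeholders):

```hcl
# infrastructure/terraform/terraform.tfvars (example values only)
payment_card_type              = "Visa"
payment_card_last_four         = "1234"
essentials_plan_cloud_provider = "AWS"
essentials_plan_cloud_region   = "us-east-1"
openai_api_key                 = "sk-your-openai-key"
```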

Installation & Deployment

Once configured, deploy everything using:

./deploy.sh

When the deployment completes, note the output values including the Lambda ARN, Redis Agent Memory Server endpoint, and SSH command for validation.

You can verify that the Agent Memory Server is operational by saying:

“Alexa, ask my jarvis to check the memory server.”

Running the Demo

🗣️ Usage

Invoke your Alexa device with the invocation name 'my jarvis' and try commands like:

  • "Alexa, tell my jarvis to remember that my favorite programming language is Java."
  • "Alexa, ask my jarvis to recall if Java is my favorite programming language."
  • "Alexa, tell my jarvis to remember I have a doctor appointment next Monday at 10 AM."
  • "Alexa, ask my jarvis to suggest what I should do for my birthday party."

Teardown

To remove all deployed resources:

./undeploy.sh

Slide Deck

📑 Agent Memory Server with Alexa Presentation
Covers demo goals, motivations for a memory layer, and architecture overview.

Architecture

Skill Handler Implementation This architecture uses an Alexa skill written in Java and hosted as an AWS Lambda function. At its core, the Lambda function implements a stream handler that processes user requests and responses, using the Agent Memory Server as memory storage.
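The request-handling pattern described above can be sketched with plain JDK types. The real skill extends the ASK SDK for Java's stream handler and registers intent handlers; the intent names and responses below are illustrative stand-ins, not the demo's actual handlers:

```java
import java.util.Map;
import java.util.function.Function;

public class HandlerDispatchSketch {
    // Minimal stand-in for the ASK SDK's request-handler dispatch: each intent
    // name maps to a function that turns a slot value into a spoken response.
    static final Map<String, Function<String, String>> HANDLERS = Map.of(
            "RememberIntent", slot -> "Okay, I'll remember that " + slot + ".",
            "RecallIntent",   slot -> "Let me check my memory about " + slot + "."
    );

    static String handle(String intentName, String slotValue) {
        // Fall back to a reprompt when no handler matches the intent.
        return HANDLERS.getOrDefault(intentName, s -> "Sorry, I didn't get that.")
                       .apply(slotValue);
    }

    public static void main(String[] args) {
        System.out.println(handle("RememberIntent", "your favorite language is Java"));
    }
}
```

In the demo, the matched handler would additionally read from and write to the Agent Memory Server before composing the response.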

Chat Assistant Service The stream handler delegates to a Chat Assistant Service that leverages LangChain4J to manage interactions with the Agent Memory Server. This service implements context engineering, enriching conversations with relevant historical data stored in Redis. OpenAI is the LLM used to process and generate responses.
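The context-engineering step amounts to injecting retrieved memories into the prompt before it reaches the LLM. A stdlib-only sketch of that assembly, with hypothetical names (the demo does this through LangChain4J rather than manual string building):

```java
import java.util.List;

public class ContextEngineeringSketch {
    // Hypothetical record for a memory retrieved from the Agent Memory Server.
    record Memory(String text) {}

    // Builds a prompt that places retrieved facts about the user ahead of the
    // current utterance, so the LLM can answer with historical context.
    static String buildPrompt(List<Memory> memories, String userUtterance) {
        StringBuilder sb = new StringBuilder("You are Jarvis, a voice assistant.\n");
        sb.append("Relevant facts about the user:\n");
        for (Memory m : memories) {
            sb.append("- ").append(m.text()).append('\n');
        }
        sb.append("User: ").append(userUtterance);
        return sb.toString();
    }

    public static void main(String[] args) {
        List<Memory> memories =
                List.of(new Memory("Favorite programming language is Java."));
        System.out.println(buildPrompt(memories, "What did I say my favorite language was?"));
    }
}
```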

Known Issues

  • Initial Agent Memory Server boot-up may take several minutes before becoming reachable.
  • Alexa Developer Console may require manual linking if credentials are not fully synchronized.

Maintainers


License

This project is licensed under the MIT License.
