Astra Nova Speak

Published May 19, 2024
 11 hours to build
 Intermediate

Astra Nova Speak is an intelligent voice-activated assistant built with HTML, CSS, and JavaScript. It accepts voice commands from users, processes them, and responds in real time. The bot leverages modern web technologies to provide a seamless, interactive experience directly in the browser, so no additional software needs to be installed.


Components Used

  • HTML
  • CSS
  • JavaScript
  • Voice Command Recognition: Capture and transcribe voice commands using the Web Speech API.
  • Natural Language Understanding: Process the transcribed text to understand user intent using a simple rule-based approach or an external NLP service like Dialogflow.
  • Text-to-Speech Response: Provide verbal responses to user commands using the Web Speech API or an external TTS service.
  • User Interface: A clean and responsive UI built with HTML and CSS for user interactions.
Description


Project Description: Astra Chat Bot

Objective

The Astra Chat Bot is a web-based application designed to interact with users through voice commands. It can understand spoken language, process the commands, and provide appropriate responses. This project utilizes modern web technologies to create a responsive and interactive user interface.

Features

  1. Voice Command Recognition: Capture and transcribe voice commands using the Web Speech API.
  2. Natural Language Understanding: Process the transcribed text to understand user intent using a simple rule-based approach or an external NLP service like Dialogflow.
  3. Text-to-Speech Response: Provide verbal responses to user commands using the Web Speech API or an external TTS service.
  4. User Interface: A clean and responsive UI built with HTML and CSS for user interactions.
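The listen-transcribe-respond loop behind these features can be sketched with the Web Speech API roughly as follows. This is a minimal sketch, not the project's actual code: the `#talk` button id and the `handleCommand` dispatcher are hypothetical names, and the browser-only parts are guarded so the helpers can run anywhere.

```javascript
// Lower-case and trim the transcript so command matching is case-insensitive.
function normalize(transcript) {
  return transcript.toLowerCase().trim();
}

// Speak a reply via the Web Speech API; guarded for non-browser environments.
function speak(text) {
  if (typeof speechSynthesis !== "undefined") {
    speechSynthesis.speak(new SpeechSynthesisUtterance(text));
  }
}

if (typeof window !== "undefined") {
  // Chrome exposes the API under the webkit prefix.
  const Recognition = window.SpeechRecognition || window.webkitSpeechRecognition;
  const recognition = new Recognition();
  recognition.onresult = (event) => {
    const message = normalize(event.results[0][0].transcript);
    handleCommand(message); // hypothetical dispatcher for the commands listed below
  };
  // "#talk" is an assumed button id in the page's HTML.
  document.querySelector("#talk").addEventListener("click", () => recognition.start());
}
```
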

Technology Stack

  • Frontend: HTML, CSS, JavaScript
  • Voice Recognition: Web Speech API
  • Text Processing: Custom JavaScript functions or Dialogflow API
  • Text-to-Speech: Web Speech API
  • Backend (optional): Node.js (if using an external NLP service or for additional processing)

Implementation Steps

1. Set Up the Basic Structure

Create the basic HTML structure and include the necessary CSS and JavaScript files.

Commands Used:

Greeting Commands:

  • Hey or Hello: Initiates a conversation with the assistant, prompting it to respond with a greeting message.

 

Time-Based Greetings:

  • The assistant greets the user based on the time of day:
    • Morning (before 12 PM): "Good Morning Boss..."
    • Afternoon (12 PM to 5 PM): "Good Afternoon Master..."
    • Evening (after 5 PM): "Good Evening Sir..."
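The time-based greeting reduces to a small pure function over the current hour. A sketch (the `greeting` name is mine, the thresholds and phrases follow the list above):

```javascript
// Map the current hour (0-23) to the greeting described above.
function greeting(hour) {
  if (hour < 12) return "Good Morning Boss...";    // before 12 PM
  if (hour < 17) return "Good Afternoon Master..."; // 12 PM to 5 PM
  return "Good Evening Sir...";                     // after 5 PM
}

// Usage in the browser: speak(greeting(new Date().getHours()));
```
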


Wish Me Command:

  • Wish Me: Triggers the assistant to greet the user again, based on the current time.

 

Open Website Commands:

  • Open Google: Opens the Google homepage in a new browser tab and speaks "Opening Google..."
  • Open YouTube: Opens the YouTube homepage in a new browser tab and speaks "Opening Youtube..."
  • Open Facebook: Opens the Facebook homepage in a new browser tab and speaks "Opening Facebook..."
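One way to implement the "open ..." commands is a small site table plus a matcher (names below are my own, not from the project's source):

```javascript
// Sites the assistant can open, keyed by the spoken name.
const SITES = {
  google: "https://www.google.com",
  youtube: "https://www.youtube.com",
  facebook: "https://www.facebook.com",
};

// Return the URL to open and the reply to speak, or null if no site matches.
function openSiteCommand(message) {
  for (const name of Object.keys(SITES)) {
    if (message.includes("open " + name)) {
      const label = name[0].toUpperCase() + name.slice(1);
      return { url: SITES[name], reply: "Opening " + label + "..." };
    }
  }
  return null;
}

// In the browser: window.open(result.url, "_blank"); speak(result.reply);
```
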

 

Search Commands:

  • What is, Who is, What are: Initiates a search query based on the given phrase using Google Search. Opens the search results in a new browser tab and speaks "This is what I found on the internet regarding [query]".
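A sketch of the search branch, assuming the question prefix is stripped before building the query (the `searchCommand` name is mine; the project may speak the full phrase instead):

```javascript
// Build a Google search URL and spoken reply from a question-style command.
function searchCommand(message) {
  const query = message.replace(/^(what is|who is|what are)\s+/i, "").trim();
  return {
    url: "https://www.google.com/search?q=" + encodeURIComponent(query),
    reply: "This is what I found on the internet regarding " + query,
  };
}
```
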

Wikipedia Search Command:

  • Wikipedia [topic]: Searches Wikipedia for information on the given topic. Opens the Wikipedia page in a new browser tab and speaks "This is what I found on Wikipedia regarding [topic]".
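The Wikipedia branch works the same way, pointing at a Wikipedia article URL instead (a sketch under the same naming assumptions; the English Wikipedia domain is assumed):

```javascript
// Build a Wikipedia URL and spoken reply from a "wikipedia [topic]" command.
function wikipediaCommand(message) {
  const topic = message.replace(/^wikipedia\s+/i, "").trim();
  return {
    url: "https://en.wikipedia.org/wiki/" + encodeURIComponent(topic),
    reply: "This is what I found on Wikipedia regarding " + topic,
  };
}
```
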

Time and Date Commands:

  • Time: Speaks the current time in the format "The current time is [current time]".
  • Date: Speaks the current date in the format "Today's date is [current date]".
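The spoken phrases above can be produced with the standard `Date` locale formatters; the exact locale options here are assumptions, only the sentence templates come from the write-up:

```javascript
// Spoken reply for the "Time" command.
function timeReply(now) {
  const time = now.toLocaleTimeString(undefined, { hour: "numeric", minute: "numeric" });
  return "The current time is " + time;
}

// Spoken reply for the "Date" command.
function dateReply(now) {
  const date = now.toLocaleDateString(undefined, { month: "short", day: "numeric" });
  return "Today's date is " + date;
}
```
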

Calculator Command:

  • Calculator: Opens the system calculator (assuming a handler is registered for the "Calculator://" URL protocol) and speaks "Opening Calculator".

Fallback Command:

  • If none of the above commands match, the assistant performs a default Google search for the given query. Opens the search results in a new browser tab and speaks "I found some information for [query] on Google".
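The fallback branch is the simplest: hand the whole message to Google. A self-contained sketch (the function name is mine):

```javascript
// Default branch: search Google for whatever was said and report it.
function fallbackCommand(message) {
  return {
    url: "https://www.google.com/search?q=" + encodeURIComponent(message),
    reply: "I found some information for " + message + " on Google",
  };
}
```
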

 

Conclusion:

In conclusion, the Astra Nova Speak project demonstrates the power of integrating voice recognition and natural language processing technologies into web applications. By leveraging HTML, CSS, and JavaScript, this project creates an interactive voice-activated assistant accessible directly in the browser.

The assistant offers a wide range of functionalities, including greetings based on the time of day, opening popular websites, performing searches on Google and Wikipedia, retrieving the current time and date, and even launching system utilities like the calculator.

Through its intuitive voice interface, users can interact with the assistant naturally, issuing commands and receiving responses in real-time. This enhances user productivity and convenience, offering a seamless way to access information and perform tasks hands-free.

Moreover, this project serves as a foundation for further enhancements and integrations. Future iterations could include additional commands, improved natural language understanding, integration with third-party APIs for expanded functionality, and even integration with smart home devices for home automation tasks.

Overall, the Astra Nova Speak project showcases the potential of voice-driven interfaces in web applications, opening up exciting possibilities for improving user experiences and streamlining workflows in various domains.

 

Live demo: Astra - Virtual Assistant (main--astranovaspeak.netlify.app)
