BRAINLY SMART GLASS
The smart glass can read printed text using OCR and computer vision.
A camera mounted on the front of the glasses captures the text, which allows visually challenged people to read as well.
PREREQUISITES:
A module is needed that can capture images from a camera; a speech-synthesis module then converts the text in those captured images into speech.
The following libraries need to be installed:
OpenCV
PyTesseract
eSpeak
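On Raspberry Pi OS these can typically be installed as follows (the package names are the usual ones and are an assumption; verify them against your distribution):

```shell
# OpenCV and the pytesseract wrapper for Python
pip install opencv-python pytesseract
# The Tesseract OCR engine itself, plus the eSpeak synthesizer and its Python binding
sudo apt-get install tesseract-ocr espeak python3-espeak
```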
CODING:
Import the required libraries and set the path where video frames will be stored for text extraction.
Create a while loop that captures real-time video from the camera.
Using cv2, save each frame to the path set earlier, then convert it from BGR to RGB for OCR.
Then call pytesseract, which opens the saved video frame, processes the image, and extracts text from it.
Using eSpeak, the speech engine converts that text into audio and reads it aloud.
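The speech step above uses the python-espeak binding; where that binding is unavailable, the same result can be had by invoking the espeak command-line tool. A minimal sketch (the `speak` helper name, its defaults, and the `dry_run` flag are our assumptions, not part of the original code):

```python
import subprocess

def speak(text, voice="en", dry_run=False):
    """Build (and optionally run) an espeak command for the given text.

    Assumes the espeak CLI is installed (e.g. via `sudo apt-get install espeak`).
    With dry_run=True the command is only built, not executed.
    """
    cmd = ["espeak", "-v", voice, text]
    if not dry_run:
        subprocess.run(cmd, check=True)
    return cmd
```

For example, `speak("Hi Smart Grandpa")` would read the greeting aloud on a Pi with espeak installed.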
CODE
import cv2
import pytesseract
from espeak import espeak   # python-espeak binding

video_cap = cv2.VideoCapture(0)   # open the first attached camera

espeak.set_voice("en")
espeak.synth("Hi Smart Grandpa")

while True:
    ret, frame = video_cap.read()
    if not ret:
        break
    # Save the current frame so it can be reloaded for OCR
    cv2.imwrite('/home/pi/My coding/My_Python_code/img.jpg', frame)
    img = cv2.imread(r'/home/pi/My coding/My_Python_code/img.jpg')
    # Tesseract expects RGB, but OpenCV delivers frames in BGR
    img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    text = pytesseract.image_to_string(img_rgb)
    print(text)
    espeak.synth(text)
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_cap.release()
cv2.destroyAllWindows()
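If Tesseract is installed but pytesseract cannot find it at runtime, the wrapper can be pointed at the binary explicitly. The path below is the usual location on Raspberry Pi OS and is an assumption; check it with `which tesseract`:

```python
import pytesseract

# Assumed path: adjust to wherever `which tesseract` reports on your system.
pytesseract.pytesseract.tesseract_cmd = "/usr/bin/tesseract"
```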
TESTING:
Fix the camera onto the eyeglasses and run the code.
Put a book in front of the camera and wait a few moments.
It will automatically start reading the book.
To hear it, connect earphones to the Raspberry Pi's TRRS headphone jack, or use any speaker with an amplifier.