PROPOSED SYSTEM
A Raspberry Pi (a small but powerful single-board computer) is fitted with a night vision camera, which allows the system to operate autonomously and to locate a person in distress. A sound sensor detects the disturbance; based on the sound produced, the robot automatically moves to that area, captures an image, and sends it to the nearest police station using IoT technology. Figure 1 shows the detailed architecture of the proposed system.
While the area is silent, the robot listens for any sound; when one is detected, it moves towards the source along its predefined path. It then scans the place with its camera and detects any human face. Once an image is captured, it is transferred to the IoT website, and the user receives the image together with an alert sound. Figure 2 shows the rover vehicle.
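As a minimal sketch of this workflow, the Python loop below strings the steps together. All helper functions are placeholders standing in for the real sensor, camera, navigation, and IoT code, so this only illustrates the control flow, not the actual implementation.

import time

def sound_detected():          # placeholder: would poll the sound sensor
    return False

def move_toward_sound():       # placeholder: would drive along the predefined path
    pass

def capture_image():           # placeholder: would grab a frame from the camera
    return b""

def face_present(image):       # placeholder: would run face detection on the frame
    return False

def upload_and_alert(image):   # placeholder: would push the image to the IoT site
    pass

while True:
    if sound_detected():
        move_toward_sound()
        image = capture_image()
        if face_present(image):
            upload_and_alert(image)   # user/police receive the image with an alert
    time.sleep(0.5)                   # poll the sound sensor twice a second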
Monitoring Section
When motion is detected, actions such as recording video, taking a photo, and sending a notification are performed. Users can also choose to receive photos of each event by email after it occurs. The system keeps track of all actions and alerts generated for every camera: whenever motion is detected, the application records the activity and maintains an event log and history. The event log shows the time of occurrence and an outline of the activity. To minimize the number of false alarms, motion can be masked in areas of no concern; users can add a mask to the camera view to exclude those areas from motion detection, which reduces false alarms and unwanted notifications. Figure 3 shows the tracking of motion and the captured photo details.
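A minimal sketch of masked motion detection is shown below. It assumes an OpenCV-based approach (the paper does not name the monitoring software it uses); the camera index, mask region, and thresholds are illustrative assumptions. Pixels under the mask are excluded from detection, which is what reduces false alarms as described above.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                          # camera index is an assumption
ok, frame = cap.read()
if not ok:
    raise SystemExit("camera not available")
previous = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

mask = np.full(previous.shape, 255, dtype=np.uint8)
mask[:, :100] = 0                                  # example: ignore a strip on the left

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, previous)
    diff = cv2.bitwise_and(diff, diff, mask=mask)  # apply the concern-area mask
    if int((diff > 25).sum()) > 500:               # threshold values are assumptions
        cv2.imwrite("event.jpg", frame)            # record the event; log and notify here
    previous = gray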
Navigation System
The ultimate goal of this robot is to reduce the time spent on periodic manual checking. The robot automatically follows its patrol route using the navigation system. Reflective tape marks the turning points, corners, and edges, and an infrared sensor on the robot detects the tape. The robot travels in a straight line until it finds a strip of reflective tape, which represents a checkpoint. An internally stored data tree holds, for each checkpoint, the next checkpoint and whether to turn or carry on in a straight line.
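The sketch below illustrates the checkpoint logic. The internally stored data tree is modelled here as a simple dictionary; the checkpoint names, actions, and route are illustrative assumptions rather than the robot's actual map, and the sensor and motor calls are placeholders.

ROUTE = {
    "A": ("turn_left",  "B"),
    "B": ("straight",   "C"),
    "C": ("turn_right", "A"),    # loop back to the start of the patrol
}

def ir_detects_tape():
    # Placeholder for reading the infrared sensor over reflective tape.
    return True

def drive(action):
    # Placeholder for the motor commands (go straight, turn left/right).
    print("driving:", action)

checkpoint = "A"
for _ in range(6):                    # a few patrol steps for illustration
    drive("straight")                 # travel until the next tape strip
    if ir_detects_tape():             # reflective tape marks a checkpoint
        action, checkpoint = ROUTE[checkpoint]
        drive(action)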
Robot Vision
Artificial intelligence is used in this security robot to find and detect attackers and to give warning alerts to the human operator. Neural networks are highly flexible data-processing structures consisting of a number of nodes organized in layers. Each connection between nodes is weighted, and the weights are adjusted automatically using the neural network's training data.
The images captured by the robot's camera are split into frames, and frames are selected at random and fed into the neural network. When a human is detected in a frame, the neural network stops the robot and an alert message is sent to the guards.
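The paper trains its own neural network for this step; as a stand-in, the sketch below uses OpenCV's bundled Haar-cascade face detector to show the stop-and-alert flow on a sampled frame. The camera index and the stop/alert hooks are assumptions.

import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def human_in_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return len(detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)) > 0

def stop_robot_and_alert(frame):
    # Placeholder for stopping the motors and alerting the guards.
    cv2.imwrite("intruder.jpg", frame)
    print("human detected: robot stopped, alert sent")

cap = cv2.VideoCapture(0)            # camera index is an assumption
ok, frame = cap.read()
if ok and human_in_frame(frame):
    stop_robot_and_alert(frame)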
Implementation Process
The robot is built from parts of an off-the-shelf Vex robotics kit, connected together with hex bolts. A laptop mounted on top of the frame provides all the processing power of the robot. Four Vex omni-directional wheels are attached for effective movement, each driven by a Vex spin motor. Phidgets distance sensors are fixed to the frame with nuts, bolts, and 6-32 screws; the forward sensor is mounted on the front frame rail, while the left and right sensors are mounted on their respective side rails. A slotted angle bracket holds a downward-facing reflective sensor, and the slot allows the height of the sensor to be adjusted for different floor surfaces. Components such as the motor, power, and other controls are mounted on circuit boards with strips of Velcro so that they can be removed easily. The USB hub is attached with Velcro and the Phidgets interface with 6-32 screws on the front board; the rechargeable battery is attached with Velcro, and the Phidgets controller and Phidgets distance-sensor interfaces with 6-32 screws, on the rear board. Figure 4 represents the Raspberry Pi with the system peripherals.
Learning Algorithm
Training is a key part of developing a neural network. The difference between the expected and actual output should be minimal, and this is achieved by adjusting the connection weights. For this, the network computes the error derivative with respect to each weight (EW) using a backpropagation algorithm; for simplicity, the units in the network are assumed to be linear. The rate at which the error changes with the activity level of a unit (EA) is also calculated. The system first computes the EA for each hidden unit in the layer just before the output layer: the weights between that hidden unit and the output units are multiplied by the EAs of those output units, and the products are added. The resulting sum equals the EA for the hidden unit. After the EAs of the layer before the output have been calculated, the system computes the EAs of the other layers in the same way, moving backwards layer by layer.
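In symbols (the notation here is assumed, not taken from the paper), let y_j be the output of unit j, d_i the desired output of output unit i, and w_ij the weight from unit j to unit i. For the linear units assumed above, EA_i = y_i - d_i for an output unit i, EW_ij = EA_i * y_j for the weight from unit j to unit i, and EA_j = sum over i of w_ij * EA_i for a hidden unit j.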
A number of different designs were tried to see which produced the best results. Since the time needed to train each network grows with the resolution of the input image, the experiment was done using low-resolution pictures. The goal of the experiment was to find the number of hidden-layer nodes that produces the most accurate results.
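A minimal version of this search could look like the sketch below, which assumes scikit-learn's MLPClassifier and a synthetic stand-in for the low-resolution image data (both assumptions, not the paper's actual setup).

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Stand-in for the low-resolution image data used in the experiment.
X, y = make_classification(n_samples=500, n_features=64, random_state=0)

# Try several hidden-layer sizes and keep the one with the best cross-validated accuracy.
for hidden_nodes in (4, 8, 16, 32, 64):
    clf = MLPClassifier(hidden_layer_sizes=(hidden_nodes,), max_iter=2000, random_state=0)
    score = cross_val_score(clf, X, y, cv=3).mean()
    print(hidden_nodes, "hidden nodes ->", round(score, 3))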
Enhanced Reweight Mechanism In Ensemble
Algorithm Steps
NC1 = Instances taken for training (model) from class 1 (Not Crime) in the first iteration
NC2 = Instances taken for training (model) from class 2 (Crime) in the first iteration
PC1 = % of instances correctly classified, out of those taken for training (model) from class 1 (Not Crime) in the first iteration
PC2 = % of instances correctly classified, out of those taken for training (model) from class 2 (Crime) in the first iteration
CRC1 = Class reweight in class 1 (Not Crime)
CRC2 = Class reweight in class 2 (Crime)
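The exact reweight formula is not spelled out in this section, so the sketch below only illustrates one plausible reading of the variables above: the class that was classified less accurately in the first iteration receives a higher weight in the next one. The function name and the formula itself are assumptions.

def class_reweights(nc1, nc2, pc1, pc2):
    # Assumption: the lower a class's accuracy (PC) in the first iteration,
    # the larger its reweight (CRC) in the next iteration; the counts (NC)
    # give a small extra boost to the class with fewer training instances.
    crc1 = (100.0 - pc1) / 100.0 + 1.0 / nc1   # class 1 (Not Crime)
    crc2 = (100.0 - pc2) / 100.0 + 1.0 / nc2   # class 2 (Crime)
    total = crc1 + crc2
    return crc1 / total, crc2 / total

# Example: majority class classified well, minority (Crime) class poorly.
print(class_reweights(nc1=990, nc2=10, pc1=99.0, pc2=40.0))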
SYSTEM DESIGN AND RESULTS
1. Requirements For Making The System
Raspberry Pi - A low-cost, credit-card-sized computer that connects to a computer display or television and uses a standard keyboard and mouse. It is a capable little device that enables people of all ages to explore computing and to learn how to program in languages such as Scratch and Python.
Night Vision HD Camera - Infrared night vision combines infrared illumination in the spectral range of 700 to 1,000 nm with HD cameras sensitive to this light. The scene, which appears dark to a human viewer, is rendered as a monochrome image on a normal display.
Sound Sensor - Used to detect sound and measure its intensity.
IR Sensor - A light sensor tuned to wavelengths in the infrared range.
DC Motor (Robot module) - Designed to convert electrical current into the mechanical power that drives the robot, applying a fixed amount of torque to the motor shaft.
Raspbian Jessie - The operating system; Raspbian is highly optimized for the low-performance ARM CPUs of the Raspberry Pi line. Figure 5 shows the block diagram.
In this system, the infrared sensor makes the robot move automatically along a specific path, the sound sensor detects sound in a particular area, and IoT is used to send the captured image to the police station. To assemble the hardware, connect the USB HD camera and a power bank to the Raspberry Pi, plug the HDMI cable from the monitor into the Raspberry Pi using a VGA-to-HDMI converter cable, and finally connect a USB mouse and USB keyboard to the Raspberry Pi.
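A minimal sketch of reading these sensors and the camera on the Raspberry Pi is shown below. The GPIO pin numbers are assumptions, and RPi.GPIO and OpenCV are used as one possible choice of libraries; the sound and IR sensor modules are read as simple digital inputs.

import cv2
import RPi.GPIO as GPIO

SOUND_PIN = 17                         # pin assignments are assumptions
IR_PIN = 27

GPIO.setmode(GPIO.BCM)
GPIO.setup(SOUND_PIN, GPIO.IN)
GPIO.setup(IR_PIN, GPIO.IN)

camera = cv2.VideoCapture(0)           # the USB HD camera connected to the Pi

if GPIO.input(SOUND_PIN):              # sound detected in the monitored area
    ok, frame = camera.read()
    if ok:
        cv2.imwrite("capture.jpg", frame)   # image to be sent via IoT
on_tape = bool(GPIO.input(IR_PIN))     # IR sensor reading used for path following

GPIO.cleanup()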
2. Dataset
Good accuracy is important for any such system. To obtain it, a synthetic dataset was created, covering all possible combinations of records so as to produce the best-trained model. The dataset consists of session details (Morning, Afternoon, Evening, Night) and location details (city, rural, etc.).
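The exact schema of the synthetic dataset is not reproduced here, so the sketch below is only an assumed illustration using the session and location fields mentioned above, a binary crime label, and the 99:1 imbalance reported in the results.

import random

SESSIONS = ["Morning", "Afternoon", "Evening", "Night"]
LOCATIONS = ["city", "rural"]

def make_record(crime_probability=0.01):
    return {
        "session": random.choice(SESSIONS),
        "location": random.choice(LOCATIONS),
        "crime": 1 if random.random() < crime_probability else 0,  # ~99:1 imbalance
    }

dataset = [make_record() for _ in range(10000)]
print(sum(r["crime"] for r in dataset), "crime records out of", len(dataset))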
3. Performance Metrics
This paper uses the confusion matrix, together with two additional metrics derived from it, to evaluate performance. True positives are positive records that are correctly classified, while false negatives are positive records that are incorrectly classified; true negatives are negative records that are correctly classified, while false positives are negative records that are incorrectly classified. TABLE I gives the confusion matrix details.
Table I: Confusion Matrix

                    Predicted Positive      Predicted Negative
Actual Positive     True Positive (TP)      False Negative (FN)
Actual Negative     False Positive (FP)     True Negative (TN)
Consider a two-class imbalance problem. Class a has a very large number of records, so it is treated as the majority class (M2); class b has a very low number of records, so it is treated as the minority class (M1). TABLE II discusses the various parameters used for evaluation.
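The specific metrics of TABLE II are not reproduced in this section, so the sketch below shows typical examples of metrics derived from the confusion matrix (accuracy, precision, and recall); treating the minority Crime class as the positive class is an assumption.

def metrics(tp, fn, fp, tn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0   # sensitivity on the minority class
    return accuracy, precision, recall

# Example counts for a 99:1 imbalanced test set.
print(metrics(tp=8, fn=2, fp=40, tn=950))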
4. Results
The collected dataset has a class imbalance ratio of 99:1, so the results are calculated for two scenarios: the first takes the class-imbalanced data as input, and the second takes a class-balanced dataset obtained using SMOTE.
Adaboost, Bagging, Stacking, and the enhanced reweight mechanism in the ensemble are applied to both datasets. The results show that the new algorithm gives better accuracy in most cases, and they also justify the importance of balancing the classes: in most cases, class-balanced data combined with the new algorithm provides the best accuracy. The highest accuracy achieved for crime prediction is 95%, obtained by the new ensemble algorithm.
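The sketch below outlines the two evaluation scenarios using scikit-learn and imbalanced-learn as assumed library choices; the enhanced reweight mechanism itself is not reproduced here, so only the standard AdaBoost and Bagging baselines are shown.

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE

# 99:1 class imbalance, as in the collected dataset.
X, y = make_classification(n_samples=10000, weights=[0.99, 0.01], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, X_fit, y_fit in [("imbalanced", X_tr, y_tr),
                           ("SMOTE-balanced", *SMOTE(random_state=0).fit_resample(X_tr, y_tr))]:
    for clf in (AdaBoostClassifier(random_state=0), BaggingClassifier(random_state=0)):
        clf.fit(X_fit, y_fit)
        print(name, type(clf).__name__, accuracy_score(y_te, clf.predict(X_te)))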
5. Applications
The system is applicable in busy areas such as buses and trains, as well as in locations with little movement and in outlying areas.
CONCLUSION
A wide area is kept under surveillance using the night vision camera fitted on the rover. The system is automatic: when a sound is detected, the robot follows its defined path to the spotted area, captures images of it, and sends them to the police station server using IoT. This concept provides an automatic, smart way of patrolling overnight to keep women safe.
Four algorithms, namely Adaboost, Bagging, Stacking, and the enhanced reweight mechanism in the ensemble, were considered in this paper. The results show that the new ensemble algorithm gives better accuracy in most cases, correctly predicting crimes with 95% accuracy.