Sensor System for Rescue Robots
Santa Clara University Scholar Commons
Electrical Engineering Senior Theses
6-13-2019
Alexander Moran and Emir Kusculu
Recommended Citation
Moran, Alexander and Kusculu, Emir, "Sensor System for Rescue Robots" (2019). Electrical Engineering Senior Theses. 47.
https://scholarcommons.scu.edu/elec_senior/47
SANTA CLARA UNIVERSITY
BACHELOR OF SCIENCE IN ELECTRICAL ENGINEERING
ABSTRACT
Table of Contents
Abstract
Introduction
  Problem Statement
  Benefit
  Existing Solutions
  Objective
  Background
  Scope
Requirements and Specifications
System
  General Overview
  Functional Block Diagram
  About the Block Diagram
  Data Acquisition
  Data Analysis
  Technologies Used
  Thermal Sensor
  Digital Camera
  LiDAR
    Overview
    Possibilities
Testing
  Thermal Camera Testing
  Digital Camera Testing
  LiDAR Testing
Results
  Mapping
  Thermal Imaging
  Picture Taking
Delivery
  Stakeholder Needs
  Time
Risk Analysis
Ethical Analysis
  Introduction
  Ethical Purpose
  Safety
  Methods
  Markkula Center for Applied Ethics
    The Virtue Approach
    The Common Good Approach
Sustainability
  Pillar of Social Equity
  8 Frugal Design Strategies
  Environmental Impact
  Solutions
  Local Recycling
  Other Technologies
Future Work
  SLAM
  Wireless
Lessons Learned
References
Appendices
  A Sensor System
  B Schematics
  C LiDAR Code
  D Imaging Code (Arduino)
  E Imaging Code (Processing)
  F Thermal Camera Code
1. Introduction
1.1. Problem Statement
Rescue workers constantly put their safety and lives on the line; in particular, 42.6% of firefighter deaths happen on-scene [1]. This is due in part to the need to enter unstable buildings that are on the brink of collapse, buildings filled with toxic gases, or environments with temperatures the human body cannot tolerate. Rescue workers must enter these buildings to find the people trapped inside, and oftentimes the rescuers have no idea where, or how many, people are in the building. The same holds true for earthquakes: buildings that have collapsed or are structurally unstable must still be entered to reach a trapped person. A versatile sensor platform, used with an appropriate robotic transport, could increase the safety and effectiveness of rescue workers.
1.2. Benefit
The goal of this sensor platform is to reduce injuries and deaths among rescue workers on duty while also improving the effectiveness of their missions. In cases where a building's structural stability is compromised and it does in fact collapse, a rescue worker does not have to perish; instead, only the replaceable robot is destroyed. With more effective use of the first responders' time, they will also be able to rescue more of the people trapped in a building.
1.4. Objective
There is a need for a sensor system that augments existing robotic solutions for disaster situations. Our versatile system can be used on a variety of robots, such as an RC car, a drone, or a snake robot. This sensor and imaging solution will ultimately have a wide range of applications and be portable to different mobile robotic platforms; however, the focus is on helping first responders in search and rescue scenarios. Because these robots are small, the system will be able to get into the tight spaces of a collapsed building, or even the small openings left by a mine collapse. It could also be used in other situations, such as casual play and training games, and it has multiple uses in indoor robotics, where autonomous robots need some kind of map of their environment. This mapping is what we are trying to accomplish.
We are building a scanning and imaging solution that can map and describe an indoor environment and its structure from the perspective of a mobile platform. It will also be able to determine whether there are any life forms in these structures using heat sensing. We will use a rotating LiDAR sensor or an ultrasonic sensor to scan the environment structurally; these sensors give us data on the size and shape of the larger room. We will then use a thermal camera to search for signatures of life. We are not certain a thermal camera will be the best sensor, but it will work if we can reliably detect the unique heat signature of warm-blooded life forms. When a signature is detected, a digital picture will be taken. Having recorded all of this information, we will use a microcontroller to process it and relay it to a remote display or mobile platform. This information can then be used in coordination with a robot small enough to fit into tight spaces to explore and determine whether there are any signs of life. For testing, we will combine the sensor system with a small push platform that we can use to explore environments. We plan to make our methodology versatile so it can be used with different robot types, such as a snake robot. Our main purpose is to build the detecting component, which could in theory be combined with any kind of robot to accomplish more specialized tasks.
1.5. Background
After looking at the existing solutions, we can see that there is a need for a sensor system designed for rescue robots. LuminAID [2] is only capable of providing light for a first responder; it cannot help if a first responder cannot enter a space or if the space is too unsafe to enter. Time and resources would be wasted if a worker had to move unnecessary debris out of the way. Furthermore, if a place were unsafe to enter and the room collapsed after a worker entered, it would create another issue in an already problematic rescue scenario. A robot like Scorpion [3] is specifically designed to navigate small spaces and unsafe areas, but its development cost of roughly $100 million consumes an enormous amount of resources. A universal sensor platform system in the range of $400-$700 would cut major costs; the only thing users would need to do is design the robot meant to host the system.
This platform builds on several enabling technologies. Two major mapping technologies are sonar and LiDAR. Sonar [4] sends an ultrasonic pulse out from the sensor and measures the time it takes for the reflected pulse to return, which provides the data needed for mapping. Sonar is most notably used in autonomous driving cars such as Tesla and Waymo vehicles [5]. LiDAR [6] is similar to sonar in that it sends out a signal and measures its return; however, the signal is light instead of sound. Think of shooting a laser beam at a mirror and measuring the time it takes for the reflection to return. LiDAR is commonly used for SLAM (Simultaneous Localization And Mapping) projects [7], such as a Roomba robot; SLAM is discussed in a later section.
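As a simple, illustrative calculation of the time-of-flight principle both sensors rely on (this sketch is not part of our implementation; the propagation speeds are standard constants and the echo times are made-up examples):

    # Round-trip time of flight: the pulse travels to the target and back,
    # so the one-way distance is (propagation speed * elapsed time) / 2.
    SPEED_OF_SOUND = 343.0   # m/s in air, approximate
    SPEED_OF_LIGHT = 3.0e8   # m/s

    def tof_distance(elapsed_s, speed):
        return speed * elapsed_s / 2.0

    # Example: a sonar echo arriving after 6 ms and a LiDAR return after 20 ns
    print(tof_distance(6e-3, SPEED_OF_SOUND))   # ~1.03 m
    print(tof_distance(20e-9, SPEED_OF_LIGHT))  # ~3.0 m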
Other enabling technologies include infrared sensors [8]. These are used in many different applications, such as motion detectors for lights, thermal cameras, and obstacle avoidance. An infrared sensor detects the infrared radiation emitted by the objects in its line of sight. Two devices that implement infrared, discussed later, are the Kinect and the PIR (Passive Infrared) sensor.
Given these technologies, we can create a sensor system that uses LiDAR or sonar to map the room the system is in; existing rotating LiDARs are capable of creating a 2D representation of a room. To find mammalian life in a room, infrared sensor solutions can be used to detect warm, human body temperatures.
In future iterations, the LiDAR can be enhanced. LiDAR already has 3D capabilities, which can be found in construction surveying; in fact, an SCU senior design project from 2018 used LiDAR to create a 3D image of a room. With 3D imaging, our sensor system would provide users with information that allows them to make faster and more accurate decisions. One great benefit of LiDAR is its use in SLAM: with proper SLAM algorithms, our sensor system could help turn any robot it is mounted on into an autonomous device that can search for people and navigate on its own [7].
There are many different robots being developed for different scenarios. Two common robots are the RC car and the drone; these would be the best to start with when implementing our system on an actual robot. However, because our system is meant to be universal, we also want it to work with unusual robots like a robotic snake [9]. Many more robots have been created because every disaster scenario is different: a fire scenario requires a rigid, robust robot that can keep the system safe from fire and falling debris, while a flood scenario renders many robots useless, so a boat robot or a drone would be needed. Whatever the scenario, our system needs to work with the robot.
Our classes, such as COEN 11 and 12 and Mechatronics, prepared us to deal with Arduino coding and the use of a microcontroller. However, we were not taught languages such as Python or Java, which were needed to make use of our LiDAR data and of the software Processing, so we had to learn those languages ourselves. Furthermore, the biggest learning curve was getting the different programs to communicate with each other.
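The core of that communication is the serial link between the Arduino and the desktop software. A minimal sketch of the Python side is shown below; the port name and baud rate match those used in Appendix C, but the line handling here is only illustrative:

    import serial

    # Open the serial link that the Arduino sketches print to
    port = serial.Serial('COM6', 115200, timeout=1)

    while True:
        line = port.readline().decode('utf-8', errors='ignore').strip()
        if not line:
            continue
        # Each sensor sketch streams comma-separated text lines; the desktop
        # code splits the fields and hands them to the plotting routines.
        fields = line.split(',')
        print(fields)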
Finally, to communicate with users, we will make use of a display that is easy on the eyes. As the book The Design of Everyday Things tells us, there is nothing worse than putting in extra effort to understand what someone is showing you [10]. While our system runs, the data being displayed will be readable enough to understand what is being shown without much thought, so that the sensor system experience is seamless.
1.6. Scope
Our project will provide useful information to first responders using limited technology. Our system will:
1. Determine the physical dimensions of a space
2. Detect life signatures
3. Snap pictures when needed
Because there are so many different scenarios that could interfere with our sensor system, we will do a proof of concept assuming an earthquake scenario. This creates a more ideal environment for us, as we do not need to worry about fire and smoke interfering with the thermal and LiDAR sensors. In a future iteration the sensor system will be wireless with a one-hour battery life; as of now, the system is wired to a computer.
Furthermore, a remote display is required to show readable data to the user, along with a mounting platform for the sensor system. To elaborate, the current model of our project sits on a platform with caster wheels that is moved manually, and the microcontrollers are attached to our computer via wires. In the next iteration, we would mount it on a remote-controlled robot where the intelligence resides with the user, who navigates and captures pictures based on what the thermal camera picks up.
2. System
A functional system needs three main parts. The first, Sensor Data Acquisition, gathers the data from our sensors and turns it into usable data. The second, Data Analysis, takes the gathered data and turns it into visual, human-readable data. The third, the Display, visually represents that data. This process is shown in Figure 2.
Block Overview
Looking at Figure 2, we can see the technologies used: the LiDAR, the thermal sensor, and the digital camera. Their data is collected on a microcontroller and processed in the Data Acquisition block. The data is then sent via serial to the computer, where software turns it into useful information. In a future iteration, users on the computer would be given a Display Controller, letting them decide what information is shown and in what style via User Input, most likely a keyboard. Finally, the useful data is rendered as visual information that is readable by the user of the system.
2.2.2. Data Analysis
Data Analysis is the portion where all of our code and software lives. We use software such as Processing, the Arduino IDE, and Python. Processing takes the image's MCUs (minimum coded units) and displays an image on the screen for the user. Our Python code turns the LiDAR and thermal data into useful information so it can be displayed on the screen. The LiDAR portion takes the past five sets of measurements and creates a map of each set, with different colors used to represent the different sets of data. This lets the driver see how the view of the room changes with robot motion and where they are. The Arduino IDE is what helps these programs communicate with the sensors' microcontrollers so that we receive the right data and trigger the right events. Currently we stream the thermal image, so the Python code is set up to display it for the user and compare it with the digital image obtained. The digital image itself takes about 30 seconds from trigger to fully displayed image at VGA (640x480 pixel) size.
The environmental map is updated constantly, and as of right now so is the thermal image. Once this analysis is done, we want the final product to have a display that resembles Figure 3.
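A minimal sketch of how the last five scans could be overlaid in different colors is shown below (this is illustrative only and is not the exact code we ran; the full plotting loop is in Appendix C):

    import matplotlib.pyplot as plt
    from collections import deque

    colors = ['red', 'orange', 'green', 'blue', 'purple']
    recent_scans = deque(maxlen=5)   # keep only the past five scans

    def show_scans(new_scan_xy):
        # new_scan_xy is a pair of arrays (x, y) produced by the
        # polar-to-Cartesian conversion in Appendix C
        recent_scans.append(new_scan_xy)
        plt.clf()
        plt.xlim(-6000, 6000)
        plt.ylim(-6000, 6000)
        for color, (x, y) in zip(colors, recent_scans):
            plt.scatter(x, y, s=1.5, c=color)
        plt.pause(0.1)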
Figure 4: Example Display
2.4. Thermal Sensor
When first looking for a thermal sensor, we considered multiple solutions: the Kinect and the PIR sensor. Both are used in devices that detect the presence and movement of mammalian life.
The Kinect uses depth mapping to identify objects. Infrared sensors take in information from in front of the Kinect and build a 3D depth map inside it; based on this depth map, the Kinect looks for joints to identify a person in the image and reacts accordingly. The Kinect is also readily available for about $15-$20, with many guides online. However, in an emergency scenario people can be lying on the ground unconscious, which means no joints will show in the depth map and we would skip past someone still alive in the building.
A PIR (Passive Infrared) sensor is cheap and easy to implement, but it is used in motion-detection applications. Again, if someone is lying unconscious on the ground, we would not be able to detect them and would skip past someone still alive.
The sensor we decided to go with is the AMG8833 IR thermal camera [15] (product #3538). It has a -20 °C to 80 °C range, which easily covers a human body temperature of 97 °F (36.1 °C) to 99 °F (37.2 °C). It meets all of our specifications as well: its 8x8 pixel array satisfies our minimum 64-pixel resolution, and its 60° x 60° angle of view covers a wide area in front of the sensor system. The product description also states that it can detect a human heat signature up to 7 m away, which exceeds what we originally required. This thermal sensor uses an I2C interface, which means our microcontroller must have that capability.
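As a sketch of how the streamed readings can be checked against a body-temperature threshold (the 64-value line format matches the parsing in Appendix F, but the 30 °C threshold is an illustrative choice rather than a calibrated value):

    # Flag a possible human heat signature in one streamed 8x8 frame
    HUMAN_THRESHOLD_C = 30.0   # illustrative threshold, not calibrated

    def has_heat_signature(line):
        # line: one serial line of 64 comma-separated temperatures in Celsius
        values = [float(v) for v in line.strip().split(',') if v]
        if len(values) != 64:
            return False   # incomplete frame, ignore it
        return max(values) >= HUMAN_THRESHOLD_C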
2.5. Digital Camera
The digital camera options we considered included the Mini Spy Camera with Trigger and the Adjustable Pi Camera.
2.6. LiDAR
2.6.1. Overview
A LiDAR measures point distances using light beams and the time it takes for the light to reflect back to the sensor. There are multiple types of LiDARs: basic ones can only measure in one dimension (1D), more sophisticated LiDARs measure in 2D by rotating the sensor, and even more sophisticated sensors measure in 3D. We believed a LiDAR sensor was essential because it would give us an overview of the environment's layout. The closed environments in our use scenarios can be mapped with an accurate LiDAR sensor, and these maps can later help the user navigate the environment more efficiently.
2.6.2. Possibilities
As the complexity of the sensor increases, so does its cost. Out of these options we chose a 2D LiDAR. After determining the dimensionality of our sensor, we had a few more choices to make regarding range, data rate, angular resolution, and software interface.
3. Testing
For our testing, we decided to test each unit first. This ensures that any future problems we run into are a result of the integration and not of the units themselves. Once each unit was working, we tested the integration aspects of the project.
● Gather thermal image data and display a live image
○ By looking at a live image, we can tell if a hot object enters the view of the thermal camera
● Write and read a file on an SD card
○ Ensures there is no issue with the memory card itself
● Take an image and store it on the SD card
○ We can read directly from the SD card when we plug it into the computer; this ensures the image is written properly and that any further problems are in the processing
● Use the spacebar as a trigger to take a picture and process the image on the computer
○ Ensures our imaging and processing can be accomplished through a triggered event
● Map the inside of a small square box
○ Ensures that the data being displayed does indeed show a box
By the end of testing we had built our platform in the Maker Lab. We laser-cut acrylic into a box and a platform; the platform was drilled and caster wheels were attached. The box was glued together minus one side, and this is where we placed our microcontrollers. The LiDAR was placed on top so that nothing would be in the way of its scan, and the thermal and digital cameras were attached to the front side of the box. Figure 5 shows the overall design, Figure 6 shows how the front of the system was set up, and Figure 7 shows the inner wiring of the box containing the microcontrollers.
Figures 6 and 7: Close-up of the circuitry and the front of the sensor system
3.1. Thermal Camera Testing
Our thermal camera uses I2C communication; to talk to it, we used an Arduino and the Adafruit library to stream the data over serial. The data arrives as an 8x8 array of floats, matching the pixel resolution, and the data resolution itself is 8 bits. We also had to write Python code to image that data, primarily to see whether the readings passed our threshold temperature. The thermal camera was intended to be used as a detector rather than an imager, but to easily determine whether the threshold was crossed, we decided to image it. Our Python code takes the serial-streamed data and renders it as an 8x8 pixel image with a color gradient. To verify that it worked, we experimented by trying to detect the heat signature of someone standing in front of it at different distances; we were able to detect someone at 3 m.
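A minimal sketch of that imaging step is shown below (illustrative only; the code we actually used is in Appendix F, and the color map and fixed temperature range here are our own choices):

    import numpy as np
    import matplotlib.pyplot as plt

    def show_frame(frame_8x8):
        # frame_8x8: 8x8 list (or array) of temperatures in Celsius
        grid = np.array(frame_8x8)
        plt.clf()
        # A fixed temperature range keeps the color gradient stable between frames
        plt.imshow(grid, cmap='inferno', vmin=20, vmax=40)
        plt.colorbar(label='Temperature (C)')
        plt.pause(0.1)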
3.2. Digital Camera Testing
The preliminary test was to make sure the camera could take photos. Originally we wanted the image sent directly from the camera to the computer display. This proved quite difficult: we tried to use Python for the process, but we would only get a fragmented image rather than human-readable data. Instead, we decided on a store-first, display-after method. The advantage is that in future iterations we can give the user the ability to shuffle through all of the stored photos, which could be quite useful for documentation and double-checking purposes.
Because the standard microcontroller memory has no room to store the image data, we added an Arduino SD card breakout board, which brought in the SD card library. To ensure the SD card worked, we ran sample code to write to and read from the card. We then worked on writing a JPEG picture onto the card, taking the card out, and connecting it to our computer to display the image; after a few tweaks we saw our first JPEG file on the SD card. Next we moved to the Processing portion. We had access to some code that displayed the images on the SD card one by one. This was not ideal, so we changed the code to display the image taken during a triggered event. Using the spacebar as the trigger, we made Processing work together with the Arduino IDE. After many iterations, we finally managed to trigger the camera to take a picture and have Processing display that exact picture on the user's screen.
3.3. LiDAR Testing
Figures 10 and 11: Images of the room scanned
To test our LiDAR, we first needed to verify that communication with the LiDAR worked. Our LiDAR uses serial port communication with a Get Surreal controller board as the interface. The Get Surreal board serial-streams data in the form "A, Angle, Distance, Signal Strength": "A" marks the start of the line, Angle is in degrees from 0-360, Distance ranges from 0-6000, and Signal Strength ranges from 0-2000. For our test we needed to write Python code that could turn the radial data into a 2D map and plot each subsequent set of points on a graph. From this graph we checked whether we could observe recognizable features of the room we were in; this is mainly how we made sure our system worked.
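As a concrete illustration of the conversion from one radial reading to map coordinates (the reading itself is made up):

    import math

    # One hypothetical Get Surreal line: "A,90,1500,200"
    # angle = 90 degrees, distance = 1500, signal strength = 200
    angle_deg, distance = 90, 1500
    x = distance * math.cos(math.radians(angle_deg))
    y = distance * math.sin(math.radians(angle_deg))
    print(round(x, 1), round(y, 1))   # 0.0 1500.0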
4. Results
4.1. Mapping
After setting up the code and the sensor, we were able to map an average-size room consistently. There were some issues with our demonstration: it was very difficult to process the data properly if the LiDAR was moved. To remedy this, we decided to show multiple data scans at the same time, which allowed us to properly analyze subsequent scans. Overall the system worked as designed given our limited resources, and there is always more functionality we can add to get more out of it.
4.3. Picture Taking
After examining the timing, we learned that a full VGA image of 640x480 pixels took over 30 seconds to process. This was not ideal, as we want an image to be visible almost immediately after it is taken. We decided to drop the quality to 320x240 pixels, and our processing time dropped to about 10 seconds. Considering that Figure 13 is 320x240, this loss in quality is acceptable in exchange for the faster output of the image.
5. Delivery
The essence of our project is a self-navigable robot with a few sensors attached to it. It needs to be small enough to fit through tight spaces and easy to carry, so that important rescue supplies are not sacrificed when packing emergency vehicles or bags to the brim. Our project will feature an RC car as the robot and two sensors: one for reading and collecting data on the environment, and one for detecting the presence of a mammalian life form. We will use LiDAR and infrared, respectively. Both sensors will be interfaced with a Raspberry Pi capable of sending out information regarding the presence of a person.
5.1. Stakeholder Needs
Financial:
We want our device to be widely used not only in first-world countries but in the third world as well. Especially in the event of a catastrophe, less money spent on rescue tools means more money for food and essentials. We will attempt to find the most cost-efficient parts available, with a starting goal of keeping the system under $750. We would prefer that people not worry about the cost and instead focus on using it to keep rescue workers safe.
Technical:
In order for this system to function properly, the sensors need to be compatible. We will look for sensors that have well-documented specifications and interfaces like I2C, and avoid any sensors that require additional hardware. We want the whole system integrated on one microcontroller. The battery should be capable of running the robot for at least an hour, since smaller emergency situations usually do not last much longer. Finally, communicating information should be simple: the robot sends out a signal and its location if a person is detected.
Societal:
The point of this project is to save the lives of rescue workers and avoid any unnecessary deaths. They are the real heroes of society, and with their lives intact they can continue to save more people in the long run. Our project will help maintain the workforce that saves us in our most dire moments.
5.2. Time
We ran into a few mishaps with our project, so the timeline fell behind in some places. In particular, we expected imaging to be quick, with the picture immediately uploaded to the computer, but after some research we discovered that we needed to deliver the image from the microcontroller as MCUs. This required adding an SD card, which needed testing time of its own.
The LiDAR also took some troubleshooting to get working, plus a good deal of new code to read and analyze the data. Furthermore, researching SLAM and integrating the sensors so that they work as one cohesive unit pushed the schedule back as well.
6. Ethical Analysis
6.1. Introduction
As engineers, we shape the world around us as much as any other profession; for this reason we need to know and appreciate ethics. Here we discuss the ethical considerations we made in planning our senior project.
6.2. Ethical Purpose
Both our main purpose and the methods we chose to pursue are coherent with multiple ethical theories. Our ethical purpose stems from the theory of utilitarianism, providing the greatest good for the greatest number of people; this shows in the fact that our project is trying to provide relief to rescue workers who may be entering danger. Our methods, meanwhile, are meant to align with the deontological (rule-based) ethical theory: the rules we chose to follow are the IEEE Code of Ethics and the Universal Declaration of Human Rights. We chose to do so to ensure the highest standard of safety.
6.3. Safety
Because our project will be used in situations where speed and accuracy are important, it needs to be reliable. Moreover, because it might be used in conditions where infrastructure is damaged, it needs to be self-sufficient. Finally, our project must not harm the environment or the people within it.
6.4. Methods
We want our project to meet the highest ethical standards. For this reason we have done a deep ethical examination of our project and plan on strict methods to ensure that it meets these standards. We plan to use extensive simulations to make sure our project performs adequately in the difficult situations it might be used in, and we then plan to consult both industry specialists and academics to make sure that our project is operational.
6.5.2. The Common Good Approach
In this approach, the belief is that life in community is good in itself and that our actions should reflect that [16]. It pushes the idea that we should have compassion for everyone in the community, even the most vulnerable. Because firefighters fight to help us, if we are able to help and protect them, they can in turn continue to protect us and the entire community. By helping to preserve the life of one person, we contribute to preserving the lives of the community around us.
7. Sustainability
7.1. Pillar of Social Equity
There are three pillars of sustainability: ecological protection, economic development, and social equity. The focus of our Sensor System for Rescue Robots is the social equity aspect, because we are after the safety and protection of the lives of the people who serve to protect our community.
Rescue workers are put in danger every day. Firefighters go head first to the scene of a fire, police go head first into the scene of a crime, and medical responders go toward the scenes of both. Unfortunately, on-scene fatalities account for many rescue worker deaths; in particular, 42.6% of firefighter fatalities occur on-scene at a fire [1]. The social injustice in this scenario is that firefighters perish while protecting their community: a valuable member of the community is lost and can no longer help another person caught in danger.
Although every member of society makes a mark on the community with their work, most are not constantly risking their lives on the job. The wealthiest class of people have the safest jobs; they work in buildings and have a nice desk to sit at with a comfortable chair. This stands in stark contrast to the lower class: people who work with machinery, in construction, and even on farms face greater health risks just by going to work. Not only can they be seriously injured by the tools they use, but over time their bodies begin to degrade. Even with this stark contrast, there are many precautions businesses and their workers can take to maintain their health. Rescue workers have the luxury of neither the high class nor the low class: when there is danger, they go straight to it. If they do lose their life, it puts society as a whole at risk, since there is one less person out there who can help us in our time of need.
Our objective with this project is to prevent rescue workers from needlessly losing their lives in the line of duty. Our robot was shaped with the sole purpose of avoiding unnecessary risk to a rescue worker's life in the event of an emergency or catastrophe. Rather than send a firefighter into a building following an earthquake or into the scene of a massive fire, we want our robot to go in first so that a precise course of action can be decided for these honorable members of society. The robot is sent into a structurally unsound building and seeks out any human life forms; pictures are taken and sent to someone outside the building to determine whether someone is trapped inside. With this information, rescue workers can be sent to an exact room, getting into and out of the building as fast as possible and spending minimal time in an unsafe structure.
With our project, we hope to provide social equity to rescue workers; they deserve it for their noble deeds. Ethically, it is good that we are working to protect the people in our society who risk their lives for the safety of others. Moreover, by focusing on keeping rescue workers safe, we benefit society as a whole, since we retain the heroes who can continue to help people throughout their long lives.
7.2. 8 Frugal Design Strategies
Ruggedization: Our project is meant to be used in situations where the building is not structurally sound. Ours in particular is designed for earthquake scenarios where there is no fire, so with more time we would want the system to be able to be hit by falling debris and still maintain its integrity.
Simplification: Our project is designed to be quick and to the point. There is no use for fancy interfaces; we simply want it to take pictures and send data.
Adaptation: Our project simply shows the capabilities of our robot system. It is meant to be expanded upon so that people can integrate it into robots of all different sizes.
Reliance on local materials and manufacturing: Our parts were mostly ordered new from online retailers. However, if we were ever to mass-produce our project, we would want to contact local businesses or recycling centers to ask for any microcontrollers they have no use for.
User-Centric Design: The project is meant to be used during high-stress situations, so we need a very simple interface. The only action a rescue worker needs to take is to navigate the car with a controller; the robot itself will take pictures and send them to the driver's screen.
Lightweight: Because rescue workers already have to carry so much gear, we do not want to add so much extra weight that they would be discouraged from using the system. Our project is designed with few sensors and a simple setup so that it does not intrude on the gear that must already be carried.
The strategies we are currently implementing in our project are Simplification, User-Centric Design, and Adaptation, because we want to address the "Usability" aspect of "Professional Issues and Constraints." The biggest obstacle to introducing any new technology to an old line of work is that people need to be convinced to change. If our project has very few sensors, no bells and whistles, and very simple user inputs, rescue workers will understand it better and be more inclined to use it for the protection it provides. Moreover, we know different emergency scenarios call for different robots: an RC car would not be ideal for every building, so implementing our system on a drone or a snake robot would increase its effectiveness. With a simple system, we provide an easy user experience and the opportunity to expand our system's capabilities.
7.3. Environmental Impact
...wasted each day of the year; that means we are filling a little over 20 Olympic-size swimming pools every single day and simply emptying them.
7.4. Solutions
Looking at the CO2 emissions of a microcontroller, 81% of them come from the production phase [17]. The best course of action against this environmental impact is to avoid it altogether. To make our design environmentally friendly, it would be ideal to go to an electronics recycling facility and retrieve working microcontrollers they may already have; either way, we should use a microcontroller that has been used and neglected rather than purchase a new one. To control the silicon footprint of all of our electronics, we would do well to seek vendors whose manufacturing processes aim for net-zero impact, but here too we could source our thermal camera, digital camera, and LiDAR from a recycling facility, as this would not expend new water to create our device.
7.6. Other Technologies
Biogas is an emerging technology in India, and it shows perfectly how the frugal design strategies of Affordability and Renewability can work. India's energy demand is roughly 80% dependent on crude oil imports, which is very expensive, but because of its plethora of waste biomass, India can produce about 49 billion m3 of biogas per year [22]. This makes it an exceptional way to save India money, especially for those in poorer communities who can reap the benefits of locally produced biogas. Since the biogas itself is made from waste material, it is India's attempt at making a circular green economy. Furthermore, because biogas is produced locally, this technology also satisfies the Reliance on Local Materials and Manufacturing strategy.
Soapen is a technology that serves to improve hygiene in developing countries. As many as 3.5 million children worldwide die each year from diarrhea and respiratory infections [23], largely due to a lack of hygiene. The frugal design strategy used most notably here is User-Centric Design: kids like to draw, but they do not take time out of the day to wash their hands, and sometimes teachers have to take them to the bathroom individually. With Soapens, kids are encouraged to draw on their hands with the soap, since doing so is second nature for them. The teacher can then easily keep track of who has washed their hands and who has not, since the colored soap will still be on the hands of those who have not. Even better, the other frugal strategy used is Lightweight design: Soapens fit easily into a child's backpack and can be taken out for an activity during the day.
8. Future Work
8.1. SLAM
We believe that adding SLAM to our platform solution could provide important data to first responders. We could use the LiDAR data to create a 2D map of the environment, and this map would allow the user to determine the best routes through it. To map an environment, multiple LiDAR scans are needed; subsequently, these scans must be stitched together according to the positional difference of the robot between them. This could be done with accurate odometry on the chassis of the platform, or with scan-matching algorithms applied to the LiDAR scans. We researched these algorithms and concluded that they would require extensive programming to implement.
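A minimal sketch of the odometry-based alternative (illustrative only; the pose values are made up, and a full SLAM pipeline would estimate them rather than assume them):

    import numpy as np

    def transform_scan(points_xy, dx, dy, dtheta):
        # points_xy: Nx2 array of scan points in the robot frame
        # (dx, dy, dtheta): robot motion since the previous scan, from odometry
        c, s = np.cos(dtheta), np.sin(dtheta)
        rotation = np.array([[c, -s], [s, c]])
        return points_xy @ rotation.T + np.array([dx, dy])

    # Example: the robot rolled 500 mm forward and turned 10 degrees, so the new
    # scan is rotated and shifted before being overlaid on the existing map
    aligned = transform_scan(np.array([[1000.0, 0.0]]), 500.0, 0.0, np.radians(10))
    print(aligned)   # approximately [[1484.8, 173.6]]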
8.2. Wireless
Our use scenario requires access to low-infrastructure, debris-filled areas and the ability to explore them, which is why a wireless connection would greatly increase our mobility. For the communication method, we would need low frequencies in order to transmit through walls and from underground. It would also need to be a standalone system that does not rely on infrastructure like WiFi, so that it still works in natural disaster situations. Even with a low bit-rate link we would be able to transmit our sensor data, because we intentionally designed our platform to work with low bandwidth.
9. Lessons Learned
Starting this project, we had an idea: a platform system that could scan the room it was in. But we did not really know all of the challenges we would have to overcome. One of the first things we learned was that there is no single communication protocol among electronic devices; every part we looked at used a different protocol, so we decided to look into all of the primary ones. These were UART, serial, I2C, and SPI (MISO/MOSI).
Second, we had to learn to code in different languages to bring each component together. We used Python, Java, and C, and this coding greatly improved our abilities, because we were able to practice these languages throughout the project.
References
[2] "LuminAID Solar Lanterns and Solar 2-in-1 Phone Chargers", LuminAID, 2019. [Online]. Available:
https://luminaid.com/. [Accessed: 12- Jun- 2019]
[3] R. Cheng, "Inside Fukushima: Standing 60 feet from a nuclear disaster - Video", CNET, 2019. [Online].
Available: https://www.cnet.com/videos/inside-fukushima-standing-60-feet-from-a-nuclear-disaster/.
[Accessed: 12- Jun- 2019]
[4] R. Burnett, "Understanding How Ultrasonic Sensors Work - MaxBotix Inc.", MaxBotix Inc., 2019.
[Online]. Available: https://www.maxbotix.com/articles/how-ultrasonic-sensors-work.htm. [Accessed:
12- Jun- 2019]
[6] D. Lv, X. Ying, Y. Cui, J. Song, K. Qian and M. Li, "Research on the technology of LIDAR data
processing," 2017 First International Conference on Electronics Instrumentation & Information Systems
(EIIS), Harbin, 2017, pp. 1-5.
[7] K. Song et al. , "Navigation Control Design of a Mobile Robot by Integrating Obstacle Avoidance and
LiDAR SLAM," 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC),
Miyazaki, Japan, 2018, pp. 1833-1838.
[8] R. T. Valadas, A. R. Tavares, A. M. d. Duarte, A. C. Moreira and C. T. Lomba, "The infrared physical
layer of the IEEE 802.11 standard for wireless local area networks," in IEEE Communications Magazine,
vol. 36, no. 12, pp. 107-112, Dec. 1998.
[9] M. Simon, "This Robot Snake Means You No Harm, Really", WIRED, 2019. [Online]. Available:
https://www.wired.com/story/this-robot-snake-means-you-no-harm-really/. [Accessed: 12- Jun- 2019]
[10] D. Norman, The Design of Everyday Things. New York: BasicBooks, 2006.
[11] K. B. Bharath, K. V. Kumaraswamy and R. K. Swamy, "Design of arbitrated I2C protocol with DO-254
compliance," 2016 International Conference on Emerging Technological Trends (ICETT), Kollam, 2016,
pp. 1-5.
[12] "IEEE Standard for a High-Performance Serial Bus - Redline," in IEEE Std 1394-2008 (Revision of IEEE Std 1394-1995) - Redline, pp. 1-1074, 21 Oct. 2008.
[13] H. Hsin, "Texture segmentation in the joint photographic expert group 2000 domain," in IET Image Processing, vol. 5, no. 6, pp. 554-559, Sept. 2011.
[14] "RP 167:1995 - SMPTE Recommended Practice - Alignment of NTSC Color Picture Monitors," in RP 167:1995, pp. 1-7, 11 Feb. 1995.
[15] Adafruit Industries, "Adafruit Industries, Unique & fun DIY electronics and kits", Adafruit.com, 2019. [Online]. Available: https://www.adafruit.com/. [Accessed: 12- Jun- 2019]
[16] Santa Clara University, "A Framework for Ethical Decision Making", Scu.edu, 2019. [Online]. Available: https://www.scu.edu/ethics/ethics-resources/ethical-decision-making/a-framework-for-ethical-decision-making/. [Accessed: 12- Jun- 2019]
[18] IC Insights, "Microcontroller Unit Shipments Surge but Falling Prices Sap Sales Growth", Icinsights.com, 2015. [Online]. Available: http://www.icinsights.com/news/bulletins/Microcontroller-Unit-Shipments-Surge-But-Falling-Prices-Sap-Sales-Growth/. [Accessed: 25- Mar- 2019]
[20] TDR, "Free Ewaste Pick up and Drop off - Electronic Recycling", TDR Electronic Recycling, 2019. [Online]. Available: https://tdrelectronicrecycling.com/. [Accessed: 25- Mar- 2019]
[21] Recology, "Cultural Impact - Recology", Recology, 2019. [Online]. Available: https://www.recology.com/cultural-impact/#art-of-recology. [Accessed: 25- Mar- 2019]
[23] R. Goodier, "Shh.. Playing with Soap Actually Prevents Disease and Childhood Mortality", engineeringforchange.org, 2017. [Online]. Available: https://www.engineeringforchange.org/news/soapen-playing-soap-prevents-disease-childhood-mortality/. [Accessed: 25- Mar- 2019]
Appendices
A Sensor System
B Schematics
C LiDAR Code
import serial
import numpy as np
import matplotlib.pyplot as plt

# Open the serial link to the Get Surreal LiDAR controller
port1 = serial.Serial('COM6', 115200)
print(port1.name)
print("Serial Started \n")

def getData():
    # Empty the buffer, then wait for the start of a sweep ("A,0...")
    port1.reset_input_buffer()
    i = 0
    while i < 1000:
        line = port1.readline()
        if line[0:3] == "A,0".encode('utf-8'):
            print(line)
            break
        i += 1
    # Collect the rest of the sweep
    stream = line
    for x in range(720):
        stream += port1.readline()
    print(stream)
    return stream

# PHASE 1: serial byte stream -> dictionary keyed by angle
def data2dict(stream):
    variable_array = stream.split("\r\n".encode('utf-8'))
    info_dict = {}  # key = angle, value = [distance, signal, count]
    for i in range(0, len(variable_array)):
        current_line = variable_array[i]
        if len(current_line) == 0:
            continue
        angle = 0
        distance = 0
        signal = 0
        line_reading = current_line.split(','.encode('utf-8'))  # A, angle, distance, signal
        if line_reading[0] == 'A'.encode('utf-8'):
            if len(line_reading) >= 2 and ('A'.encode('utf-8') not in line_reading[1]):
                angle = int(line_reading[1])
            if (len(line_reading) >= 3 and line_reading[2] != 'I'.encode('utf-8')
                    and line_reading[2] != 'S'.encode('utf-8')
                    and ('A'.encode('utf-8') not in line_reading[2])):
                print(line_reading[2])
                distance = int(line_reading[2])
            if (len(line_reading) >= 4 and line_reading[3] != 'S'.encode('utf-8')
                    and ('A'.encode('utf-8') not in line_reading[3])):
                signal = int(line_reading[3])
            if angle in info_dict:
                # Running average of repeated readings at the same angle
                d = info_dict[angle][0]
                s = info_dict[angle][1]
                c = info_dict[angle][2]
                info_dict[angle] = [(d*c + distance)/(c+1), (s*c + signal)/(c+1), c+1]
            else:
                info_dict[angle] = [distance, signal, 1]
    print(info_dict)
    return info_dict

# PHASE 2: polar readings -> Cartesian coordinates for plotting
def polar2cart(info_dict):
    x_values = []
    y_values = []
    for i in info_dict.keys():
        x_values.append(info_dict[i][0] * np.cos(i * np.pi / 180))  # degrees -> radians
        y_values.append(info_dict[i][0] * np.sin(i * np.pi / 180))
    return (np.array(x_values), np.array(y_values))

fig = plt.figure(1)
plt.ion()
plt.show()
i = 0
while True:
    stream = getData()
    info_dict1 = data2dict(stream)
    x, y = polar2cart(info_dict1)
    plt.xlim(-6000, 6000)
    plt.ylim(-6000, 6000)
    plt.scatter(x, y, 1.5)
    plt.pause(1)
    # Keep only the last few sweeps on screen
    if i >= 5:
        fig.clear()
        i = 0
    i += 1
D Imaging Code (Arduino)

#include <Adafruit_VC0706.h>
#include <SPI.h>
#include <SD.h>

#define chipSelect 53

// Camera object (the serial connection used here depends on the wiring)
Adafruit_VC0706 cam = Adafruit_VC0706(&Serial1);

void setup() {
  pinMode(13, OUTPUT);
  Serial.begin(115200);
  // Serial.println("VC0706 Camera snapshot test");

  // Try to locate the camera
  if (!cam.begin()) {
    // Serial.println("No camera found?");
    return;
  }
  // Serial.println("Camera Found:");

  /*
  // Print out the camera version information (optional)
  char *reply = cam.getVersion();
  if (reply == 0) {
    Serial.print("Failed to get version");
  } else {
    Serial.println("-----------------");
    Serial.print(reply);
    Serial.println("-----------------");
  }
  */

  // Set the picture size - you can choose one of 640x480, 320x240 or 160x120
  // Remember that bigger pictures take longer to transmit!
  //cam.setImageSize(VC0706_640x480);   // biggest
  cam.setImageSize(VC0706_320x240);     // medium
  //cam.setImageSize(VC0706_160x120);   // small

  // You can read the size back from the camera (optional, but maybe useful?)
  uint8_t imgsize = cam.getImageSize();
  // Serial.print("Image size: ");
  // if (imgsize == VC0706_640x480) Serial.println("640x480");
  // if (imgsize == VC0706_320x240) Serial.println("320x240");
  // if (imgsize == VC0706_160x120) Serial.println("160x120");

  // Serial.println("Snap in 3 secs...");
  delay(3000);

  if (! cam.takePicture())
    return;
  // Serial.println("Failed to snap!");
  // else
  // Serial.println("Picture taken!");

  // ...
  time = millis() - time;
  // Serial.println("done!");
  // Serial.print(time); Serial.println(" ms elapsed");

  // Decoding
  // ...

  // Fill the buffer with zeros
  initBuff(dataBuff);
  // ...

  // Repeat for all pixels in the current MCU
  while (mcuPixels--) {
    // Read the color of the pixel as 16-bit integer
    color = *pImg++;
  }
}

void initBuff(char* buff) {
  for (int i = 0; i < 240; i++) {
    buff[i] = 0;
  }
}

void loop() {
}
E Imaging Code (Processing)

import processing.serial.*;

Serial port;

void setup() {
  // Set the default window size to 200 by 200 pixels
  size(200, 200);
  // ...
}

int x, y, mcuX, mcuY;

// This function will be called every time the Serial port receives 240 bytes
void serialEvent(Serial port) {
  // Read the data into buffer
  port.readBytes(byteBuffer);
  // ...

  // Remove all whitespace characters
  trimmed = inString.trim();
  // ...
  } else if (inString.indexOf("$ITDAT") == 0) {
    // Data packet
    // ...

    // Convert 16-bit color into RGB values
    r = ((inColor & 0xF800) >> 11) * 8;
    g = ((inColor & 0x07E0) >> 5) * 4;
    b = ((inColor & 0x001F) >> 0) * 8;
    // ...

    if (x == mcuWidth) {
      // MCU row is complete, move onto the next one
      x = 0;
      y++;
    }
    if (y == mcuHeight) {
      // MCU is complete, move onto the next one
      x = 0;
      y = 0;
      mcuX++;
    }
    if (mcuX == jpegMCUSPerRow) {
      // Line of MCUs is complete, move onto the next one
      x = 0;
      y = 0;
      mcuX = 0;
      mcuY++;
    }
    if (mcuY == jpegMCUSPerCol) {
      // The entire image is complete
      received = true;
    }
  }
}

void draw() {
  // If we received a full image, start the whole process again
  if (received) {
    // Reset coordinates
    x = 0;
    y = 0;
    mcuX = 0;
    mcuY = 0;
    // ...
F Thermal Camera Code

import matplotlib.pyplot as plt

# port1: pyserial connection to the Arduino streaming the AMG8833 frames (opened elsewhere)

# Read one 8x8 frame (64 comma-separated temperatures) from the serial stream
def getData():
    port1.reset_input_buffer()
    line = port1.readline()
    f_line = line.split("\r\n".encode('utf-8'))
    s_line = f_line[0].split(",".encode('utf-8'))
    data_arr = []
    if len(s_line) == 65:
        m = 0
        for i in range(0, 8):
            q = []
            for j in range(0, 8):
                q.append(float(s_line[m]))
                m += 1
            data_arr.append(q)
    return data_arr

# Map a temperature reading to an RGB color
def color(temp):
    s = (max_temp - min_temp)/255
    s = 8
    # ...
    return int(red), int(green), int(blue)

# Render one frame as a color image
# img1, pixels1: the displayed image and its pixel buffer (created elsewhere)
W, H = 400, 400
def imgData(data_arr):
    print(data_arr)
    for i in range(0, 8):
        for j in range(0, 8):
            red, green, blue = color(data_arr[i][j])
            pixels1[i, j] = (red, green, blue)  # (int(6.5 * data_arr[i][j]), 10, int(255 - (6.5 * data_arr[i][j])))
    plt.imshow(img1)
    plt.pause(1)
    fig.clear()

fig = plt.figure(1)
plt.ion()
plt.show()
while True:
    imgData(getData())