
Why Artificial Intelligence? - Honours Blog 23

Updated: May 4, 2020

A wee while ago I hit a crossroads: do I try and build the 'blind spot detection' technology, or do I just smoke-and-mirrors the entire thing in the hope of making it look like it works how it's supposed to? I decided (most might argue stupidly) to try and build this technology without knowing a single thing about where to start. It took a few weeks of research, meetings with computing lecturers and asking in a load of online forums, but eventually I came to the conclusion that it was possible. Difficult, but possible.

I'd need to use Artificial Intelligence (AI) to get this working: a way of getting a computer to recognise inputs and then do something with that information, in this case notifying the user. However, due to other parts of this project taking priority and a pandemic taking over the world, the AI side of things was put on the back burner.

But just yesterday I sourced and ordered all the parts to build a form of AI technology. It won't be blind spot detection itself, but it'll be something close enough to prove it's possible given a bit more hardware, software (and brains).

That's why I've chosen to write this blog now, explaining why AI is a vital part of this project, as over the coming weeks there will be quite a few blogs following the technology's progression.

So what is AI?


AI is a way of getting computers to learn and/or recognise certain inputs and then process that information. The branch I'll be using is often referred to as Machine Learning, which makes it a bit easier to understand. The way it was explained to me: if you wanted to teach a machine (computer) to differentiate between a cat and a dog, you would first have to teach it what a cat and a dog look like. To do this, you'd use machine learning software, such as TensorFlow, and write what's called a model. You'd teach this model what a dog and a cat look like by feeding it thousands of photos of different dogs and cats, each one labelled. This way, the model learns what dogs and cats look like, so when shown a new photo it can identify which one it is. Obviously, this is hugely oversimplified, but it's the easiest way to describe it. Models similar to the one I've just described are used in all sorts of everyday products, from Alexa recognising voice commands to cars detecting lanes.
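To make that a bit more concrete, here's a minimal sketch of what a model like that could look like in TensorFlow. To be clear, this isn't my project's code: the folder name, image size and layer sizes are all just illustrative, and it assumes you've got a 'photos' folder with labelled 'cat' and 'dog' subfolders.

```python
# A minimal sketch of a cat/dog classifier in TensorFlow (Keras).
# Assumes a folder "photos/" with subfolders "cat/" and "dog/" of
# labelled images; everything here is illustrative, not my real code.
import tensorflow as tf

# Load the labelled photos; Keras infers the labels from the folder names.
train = tf.keras.utils.image_dataset_from_directory(
    "photos", image_size=(128, 128), batch_size=32)

# A small convolutional network: learn visual features, then classify.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),           # scale pixels to 0-1
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2),                       # two classes: cat, dog
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

# "Teach" the model by showing it the labelled photos several times over.
model.fit(train, epochs=5)
```

The interesting bit is that last line: model.fit is the 'teaching' step, where the computer works out for itself what makes a cat a cat.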

Some products even have the ability to record situations they don't recognise and then share these amongst more of the same product. When Tesla's self-driving cars hit the road, the only information they had to go off was what Tesla had 'taught' them during testing. As more hit the road and the self-driving technology was confronted with new situations, these situations were recorded and shared amongst all the cars, making them more intelligent.


So why AI?


Since AI can 'learn' to recognise different situations, it's the ideal technology to allow a kit like this to recognise different blind spots. Blind spots differ from vehicle to vehicle, so, in theory, it would be possible to teach a machine to learn the difference between each one and then flag to the user when they happen to be sitting in one.

It also reduces the risk of false positives as time goes on. With sensors (one of the only other ways I figured I could do this), it would be difficult to cancel out false positives: bushes, pedestrians, bins, walls and so on would all flag as vehicles, because ultrasonic sensors can't differentiate between them. Over time, AI would recognise the difference between a truck and a wall, and only notify the user when there was a genuine hazard rather than anything that merely looked like one.
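As a rough illustration, here's a hypothetical snippet of how a trained model might decide whether to bother the user. The class names, the threshold and 'model' are all placeholders I've made up for the example; nothing here is built yet.

```python
# Hypothetical sketch: only notify when a trained model is confident
# it sees a vehicle. "model", the class names and the threshold are
# all placeholders, not anything I've actually built.
import numpy as np

CLASS_NAMES = ["bin", "bush", "car", "pedestrian", "truck", "wall"]
HAZARDS = {"car", "truck"}  # the only classes worth warning about

def should_notify(model, frame, threshold=0.8):
    """Return True only for a confident vehicle detection."""
    logits = model.predict(frame[np.newaxis, ...])[0]  # add batch dimension
    probs = np.exp(logits) / np.sum(np.exp(logits))    # softmax by hand
    best = int(np.argmax(probs))
    # A wall or a bush never triggers a warning, however confident.
    return CLASS_NAMES[best] in HAZARDS and probs[best] >= threshold
```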


What next?


So what next? Given that I'm now working from home and don't have access to all the required software and hardware, I unfortunately won't be able to build the blind spot detection software, but I should be able to get some way towards it. I've decided to build human recognition software using online, open-source materials that are available to me. Yes, I know that humans aren't blind spots, and I know it would make more sense to build a vehicle recognition device, but given that we're only meant to go outside for essential travel, human recognition will prove the same point that vehicle recognition would: that this product's software is possible to produce.


How I'm going to do it

To do this, I've had to order a few different components. I've already had confirmation that a few have shipped, which is good news given everything going on, and hopefully the rest will make it (touch wood).


I've bought:

1x Arduino Nano 33 BLE Sense (best for AI compatibility).

1x ArduCAM Mini 5MP Camera module.

1x Breadboard.

A bunch of male/female jumper wires.


Software I'll be using:

TensorFlow

Arduino IDE

Fritzing
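
As I understand it (and I'll find out for real once the parts arrive), the way a TensorFlow model ends up on a tiny board like the Nano is by converting it to TensorFlow Lite and embedding it in the Arduino sketch. Here's a hedged sketch of that conversion step, using a stand-in model rather than anything I've trained:

```python
# A rough sketch of shrinking a trained TensorFlow model so it can run
# on the Arduino Nano 33 BLE Sense. The model here is a trivial stand-in
# (the cat/dog sketch from earlier would slot in instead); I haven't
# actually run this on hardware yet.
import tensorflow as tf

# Stand-in for a real trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# Convert to TensorFlow Lite and quantise so it fits on the board.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The saved bytes then get embedded in the Arduino sketch as a C array
# for TensorFlow Lite Micro to run.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The quantisation step matters because the Nano 33 BLE Sense only has 1MB of flash, so the model has to be tiny.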

Obviously, I haven't started building yet, so I have no progress to report, but when I do, I'll be sure to post it on the blog.


Hopefully, this blog has answered a few questions as to why AI is the best way forward for blind-spot detection technology, and why I can't actually build the full thing just yet!


That's all for this blog.

Thanks!


Cover Photo Image: Chamaki, F. (2018) White building with data has a better idea text signage. Available at: https://unsplash.com/photos/1K6IQsQbizI (Accessed: 02/04/2020).
