
Machine Learning: from your vacation photos to your personal radiology assistant


Quiet please. Machine is learning.
You’ve probably heard of Machine Learning. It is a very hot topic, and for good reason. But what does it actually mean?

Let me give you a simple introduction with a story that happened to me at work.

My legendary colleague (we’ll call him Dr A) and I surprisingly had some free time during our last night shift in the emergency department. I saw Dr A using a Microsoft Word document on one of the computers to paste interesting facts about the country he was planning to visit during his upcoming holiday. Dr A being a legend, I couldn’t let him carry on in such an obviously outdated and unproductive fashion. So, I got him hooked on Google Docs.

Google Docs is just like your Microsoft Office package, but in the cloud. You don’t have to install anything on the computer. You don’t even need to have a computer. You can access the files you create (text documents, spreadsheets and slides, to name a few) from anywhere (just add an Internet connection) and from various devices (computer, phone, and tablet). Finally, it is completely free to use, and fantastic for collaboration.

Dr A loved it and wanted more. Therefore, Google Drive was next in order, to help him organise and share other documents, like PDFs and pictures.

Google Drive is cloud storage, your virtual online hard drive that you can access from any device connected to the Internet. Again, it is a free and very convenient way to have all the documents you need with you all the time. If you anticipate you won’t have a steady Internet connection, during, let’s say, your trip, you can make sure crucial documents are available offline on your computer or mobile device.

Among the most common files people like storing in the cloud are, obviously, photos. Everyone loves photos, and with modern smartphones it has never been easier to capture memories. So easy, in fact, that most of us are constantly running out of space on our phones. Sure, you can download them regularly to your home computer, but why bother when they can be uploaded automatically to your online storage right after you take them? And here comes our last piece of the puzzle, Google Photos.

While backing up photos online is nice, it’s what Google Photos is able to do with them automatically that is AMAZING. Dr A was blown away when he realised that it, without any input from him, organised his photos into events, made cool videos and photo books, and recognised the places where the photos were taken. But he went ballistic when I showed him how this piece of software can search his entire library of thousands of photos and find people, beaches, animals, cars, colours, and most other things that came to our minds.

Google Photos can search thousands of photos to find things/people/activities in them.

This is done by Machine Learning. It is not based on brute force, it is based on smarts. What do I mean? Dr A’s first instinct was that it recognised beaches and the sea by knowing the coordinates of each photo. Nope. Maybe the software is hard coded, meaning the programmers input all the intricate details of what a beach is and tried to think of every instance the software could encounter. Nope. Google Photos recognises a beach just like a human viewer would. It learned what a beach is, and in fact it is still learning and getting better at it. This is the basic difference between a learning algorithm and a common computer algorithm. Your usual algorithms can only do what programmers told them to do and nothing else. Learning algorithms actually learn new things, effectively reprogramming themselves: they adjust their own internal rules from the examples they see, rather than following instructions written once by a human. And the more data you give them, the better they get.
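To make that difference concrete, here is a minimal sketch in Python. It has nothing to do with Google’s actual system; it is a toy nearest-centroid classifier that “learns” what a beach looks like from a few made-up photo features (average blueness and greenness on a 0–1 scale). Every name and number below is hypothetical, chosen only to illustrate learning from examples instead of hand-written rules.

```python
# Toy "learner": instead of hard-coding what a beach is, we compute the
# average feature vector (centroid) of each label from labelled examples,
# then classify new photos by the nearest learned centroid.

def train(examples):
    """Learn one centroid (mean feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(model, features):
    """Return the label whose learned centroid is closest to the photo."""
    def squared_distance(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: squared_distance(model[label]))

# Hypothetical training photos: (blueness, greenness) -> label
photos = [
    ((0.9, 0.2), "beach"),
    ((0.8, 0.3), "beach"),
    ((0.2, 0.9), "forest"),
    ((0.3, 0.8), "forest"),
]
model = train(photos)
print(classify(model, (0.85, 0.25)))  # a new blue-ish photo -> "beach"
```

Notice that no line of code describes a beach: the rules live in the learned centroids, and feeding in more labelled photos moves those centroids and changes the program’s behaviour.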

One example of Machine Learning, and of how its interpretation of the world gets better with more data, is the Autopilot function recently introduced by Tesla. Tesla electric cars have just received an update that lets the car drive itself on the open road by recognising road lanes, other cars and the surroundings.

When it came out, some users reported that the software would get confused at times, wanting to take a wrong turn or not recognising the car’s surroundings correctly. This is something Tesla’s programmers would have a hard time correcting if they had hard coded the Autopilot. But since they actually made their Autopilot software a learner, users very quickly started reporting that their cars had learned from their previous mistakes and stopped making them. So the more people drive, the better everyone’s Autopilot gets. Just as my Google Photos will get better now that Dr A has started uploading his gorgeous photos. Thank you my friend.
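As a rough sketch of how a learner stops repeating a mistake, and with no relation to Tesla’s real software, here is a toy 1-nearest-neighbour “driver” in Python. It picks an action by recalling the most similar remembered situation, so adding one corrected example to its memory is enough to change its behaviour. The single feature (“curve sharpness”, 0–1) and the actions are entirely made up.

```python
# Toy "driver": choose the action stored for the most similar remembered
# situation (1-nearest-neighbour on a single made-up feature).

def predict(memory, sharpness):
    """Return the action of the closest remembered curve sharpness."""
    closest = min(memory, key=lambda example: abs(example[0] - sharpness))
    return closest[1]

# Remembered situations: (curve sharpness, action taken)
memory = [(0.1, "stay in lane"), (0.9, "brake hard")]

print(predict(memory, 0.6))   # closest memory is 0.9 -> "brake hard" (too cautious)

# A driver corrects the mistake once; the correction joins the memory...
memory.append((0.6, "slow down"))

print(predict(memory, 0.6))   # ...and the same curve now gets "slow down"
```

The fix required no new code from a programmer, only new data, which is exactly why every car on the road can contribute to making all of them better.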

What does this have to do with medicine, you ask? Well, learning software can be applied to almost anything, and there is fundamentally not much difference between recognising a beach in a photo and recognising a tumour or other anomaly on a CT scan. Think this is science fiction? Think again, because many researchers are working on exactly this. Just look at the graph depicting the number of articles on machine learning published in the medical literature over the years.

Number of machine learning articles indexed on PubMed, by year.

There is no doubt that Dr A and I will use a virtual radiology assistant during our emergency department shifts in the future. It will suggest potential diagnoses from the scans we order. We will surely laugh then about how our swimming suit photos helped teach it.

Emergency physician, blogger (ivor-kovic.com), innovator (ivormedical.com), researcher (researchgate.net/profile/Ivor_Kovic), speaker (youtu.be/Q-E-B3Pc8mk)...
