Cybersecurity and Teaching The Machine

If you attended RSA Conference this year, you probably heard many vendors talk about machine learning for cybersecurity. Or if you missed RSA, you may have caught some of the articles on using artificial intelligence to detect insider threats, with terms like user behavior analytics. You may have then started to think about how these algorithms work and the difference between supervised and unsupervised models. You may have even started to look into k-means and dig into the differences between dynamic Bayes and empirical Bayes. And you may have suddenly felt your job as a security professional required new data science skills. Well, today I would like to focus this article on one important piece of that puzzle: Teaching The Machine.

What is teaching the machine?

While teaching the machine is not a formal term that I am aware of, what I mean by it is the process that people — data scientists — go through to convert their expertise in detecting anomalies in patterns of data into something that machines can understand and learn. It is a process by which machines learn to detect these cybersecurity patterns on their own. And although a data scientist is not typically a subject matter expert in cybersecurity, that person can be a great resource for converting human interpretations into computer algorithms.

Crawl, walk, run

Humans don’t get up and walk on the same day they are born, unlike some creatures that remarkably can. There is a process through which children learn to crawl, stand, walk and run. This process usually runs in parallel with other learning, such as gestures like “bye-bye,” “give me,” and “no-no.” Gestures then get converted into words, and words into phrases. Teaching a machine is not that different; it is an iterative process of teaching, observing, teaching more and observing more, with the goal that the machine eventually gets to the point where it “runs” on its own.
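To make that teach-and-observe loop a little more concrete, here is a toy Python sketch. It is my own illustration, not anything from a real product: the feature names, labels and numbers are invented, and in practice a security analyst would confirm or correct the model's verdicts before they are folded back into the next training round.

```python
# Toy teach/observe/teach-more loop (hypothetical features and labels).
from sklearn.linear_model import LogisticRegression

# (bytes_per_second, failed_logins) -> 1 = threat, 0 = benign
features = [[10.0, 0], [12.0, 1], [900.0, 30], [15.0, 0]]
labels = [0, 0, 1, 0]

# "Crawl": first training round on a small analyst-labeled set.
model = LogisticRegression().fit(features, labels)

# "Observe": the model scores unseen events; in a real workflow an
# analyst reviews each verdict before it is added back to the data.
new_events = [[11.0, 0], [850.0, 25]]
for event, verdict in zip(new_events, model.predict(new_events)):
    features.append(event)
    labels.append(int(verdict))

# "Walk", then "run": retrain with the expanded, reviewed data.
model = LogisticRegression().fit(features, labels)
print(model.predict([[14.0, 0], [700.0, 20]]))
```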

Group learning

If I have not lost you so far, this is where things start to get more interesting. Most of us went to some type of school for group learning. We sat next to peers, then listened and learned as the teacher addressed all students. This is also where the analogy starts to fade away. Clearly, we don’t send our machines off to school. So why shouldn’t we build a “school” for group machine learning? Why shouldn’t we take what one machine has learned and clone it to another machine?

While we can’t just connect a wire between two kids and transfer all the knowledge from one to the other, we can do this with machines, and we should. What if we could share everything our machine has learned about detecting cybersecurity threats with other machines? Wouldn’t that be great? What if there was an open source initiative to share machine algorithms and open machine data models? Thankfully, there is such an initiative, and it’s named Apache Spot. One of its goals is to tackle the challenge of group machine learning.
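As a rough sketch of what that sharing could look like in practice — a generic Python/scikit-learn example, not Apache Spot's actual API or data model — one machine trains an anomaly detector on its own traffic and exports the fitted model so another machine can load it and benefit from the same learning:

```python
# Minimal model-sharing sketch with invented flow features:
# (bytes_sent, bytes_received, duration_seconds, destination_port).
import joblib
from sklearn.ensemble import IsolationForest

flows = [
    [512, 1024, 0.4, 443],
    [480, 900, 0.3, 443],
    [650, 1200, 0.5, 80],
    [1_000_000, 50, 300.0, 4444],  # unusually large, long-lived flow
]

# Machine A "learns" what its traffic looks like...
model = IsolationForest(contamination=0.25, random_state=0).fit(flows)

# ...and shares that learning as a file instead of re-teaching from scratch.
joblib.dump(model, "flow_anomaly_model.joblib")

# Machine B loads the shared model and applies it to its own flows.
shared_model = joblib.load("flow_anomaly_model.joblib")
print(shared_model.predict(flows))  # -1 marks flows scored as anomalous
```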

Apache Spot: collaboration among the good guys

Apache Spot is in its early stages, yet it already has the potential to be the platform where the good guys collaborate, sharing models and algorithms to find the bad actors. Think of it as a foundation for detecting and preventing cybersecurity threats. And the good news is that not everyone who collaborates on Apache Spot needs to be a data scientist. In fact, one of the best ways to support the effort is to download, install and run the platform on your own, then use the predefined algorithms and models and provide feedback on your results.

You can be a force for change without having to learn how Latent Dirichlet Allocation or other algorithms work. Of course, we already know the bad guys collaborate, share code and share secrets. The good guys need to unite and do the same, and Apache Spot wants to — and can — be that uniting force.
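For the curious, here is a very rough illustration of the topic-modeling idea behind algorithms like Latent Dirichlet Allocation. The "connection words" and the pipeline below are my own simplification, not Spot's implementation: treat each source IP as a document and its connection patterns as words, let the model learn which patterns normally occur together, and flag unusual mixtures for analyst review.

```python
# Simplified LDA sketch over invented "connection words" per source IP,
# e.g. destination-port_bytes-bucket tokens.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

ip_documents = [
    "443_small 443_small 80_medium 443_small",
    "443_small 80_medium 443_small 80_medium",
    "4444_large 4444_large 4444_large 4444_large",  # unusual traffic
]

counts = CountVectorizer().fit_transform(ip_documents)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_mix = lda.fit_transform(counts)

# Each row is an IP's mixture over learned "behavior topics"; mixtures
# that look nothing like the rest are candidates for review.
print(topic_mix.round(2))
```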

One Large Distributed System

Cybersecurity should not be a competitive differentiator between organizations and services. Why should you be forced to choose Bank A instead of Bank B only because Bank A is more secure? Wouldn’t it be great if all banks, healthcare providers, telecommunication systems, and governments shared a common platform for cybersecurity with built-in and continually improving cybersecurity machine models? We should, in fact, be able to expect the best security processes and services regardless of the industry.

When we think of collaboration at such a grand scale, we are no longer teaching individual machines. Rather, we are effectively teaching one large distributed system. This is where I see Teaching The Machine as one of the most important pieces — if not the most important — of the cybersecurity puzzle. It is the common thread that ties all industries together in the critical effort of doing business securely.

Please join me for my next contribution, where I will dive into more detail on using machine learning for cybersecurity.

Eddie Garcia is an information security architect at Cloudera, a provider of enterprise analytic data management, where he helps enterprise customers reduce security and compliance risks associated with sensitive data sets stored and accessed in Apache Hadoop environments. He was VP of InfoSec and Engineering at Gazzang prior to its acquisition by Cloudera, was the chief architect of the Gazzang zNcrypt product, and is the author of four issued and provisional patents for data security. Prior to Gazzang, he was responsible for Enterprise Architecture projects that helped AMD’s distribution and OEM partners securely collaborate over secure networks with single sign-on. He holds an engineering degree in computer science from the Instituto Tecnologico y de Estudios Superiores de Monterrey.