This vulnerability is not really a vulnerability in the usual sense: it is an application design flaw. It occurs when developers do not fully understand the information flows in their application, or, even worse, what data the application works with and how to protect it.
It’s time to blow the dust off your CISSP certificate and remember how much time you spent reading the sections on data classification and access models. Biba, Clark–Wilson, Bell-LaPadula: sound familiar? Why, of course!
Please return your seatbacks to their full upright and locked position
I first encountered data classification in 2008, when I was given the task of configuring Data Loss Prevention (DLP) for a mid-size company. The implementation procedure looked very straightforward on paper:
- Classify data;
- Define information flows;
- Set appropriate interceptor for each flow;
- Set up policies.
Everything is so simple and obvious that everybody got stuck on the very first point =) It is not a technical problem; on the contrary, it is complicated precisely because it depends entirely on people. In my case, no one could say which data was important, which was not, or who was even responsible for this classification. Even a set of typical classifications (Public, Sensitive, Private, Confidential and Unclassified) didn’t work.
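To make the idea concrete, here is a minimal sketch of that typical classification set as an ordered scale, with a toy rule that a channel may carry a document only if the channel is cleared for that level or higher. The class name, the ordering, and the `may_transmit` rule are my own illustrative assumptions, not part of any DLP product.

```python
from enum import IntEnum

# Hypothetical classification levels, ordered from least to most restricted.
class Classification(IntEnum):
    UNCLASSIFIED = 0
    PUBLIC = 1
    SENSITIVE = 2
    PRIVATE = 3
    CONFIDENTIAL = 4

# Toy policy rule (assumed for illustration): a channel may carry a
# document only if the channel's clearance is at least the document's level.
def may_transmit(doc_level: Classification, channel_clearance: Classification) -> bool:
    return channel_clearance >= doc_level

print(may_transmit(Classification.PUBLIC, Classification.SENSITIVE))        # True
print(may_transmit(Classification.CONFIDENTIAL, Classification.SENSITIVE))  # False
```

Of course, writing this down is the easy part; the hard part, as noted above, is getting people to agree on which document gets which level.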
DLP vendors describe data flows in their product manuals as:
- Data in Motion – Data transmitted by communication networks. For example, an email with an attachment or this blog post when we hit «Publish»;
- Data in Use – Data that is currently processed by a user. For example, an important document, some parts of which were copied to a clipboard, and then sent to a printer;
- Data at Rest – Data that is stored somewhere. For example, a database with confidential documents.
Obviously, each of these flows has its own methods of control. Data in motion can be encrypted: both the email and its attachment, for example. For data in use, the right tools must be installed on PCs: agents that monitor and intercept user actions. For data at rest, you’ll need encryption as well as an appropriate access model.
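The mapping from flow type to typical controls can be summarized in a few lines. The key names and the control lists below are my own illustrative summary of the paragraph above, not a vendor taxonomy.

```python
# Illustrative mapping of DLP flow types to typical controls.
CONTROLS = {
    "data_in_motion": ["transport encryption (TLS)", "email and attachment encryption"],
    "data_in_use":    ["endpoint agents", "clipboard and print interception"],
    "data_at_rest":   ["storage/database encryption", "appropriate access model"],
}

for flow, controls in CONTROLS.items():
    print(f"{flow}: {', '.join(controls)}")
```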
All this knowledge plays an important role in understanding how to handle A3. The web application you are trying to protect is just another information system in which data is processed and transferred, and if you simply draw all your information flows on a piece of paper, you will have taken the first step toward minimizing your risks.
In 2018 we even have software to simulate threats: for example, a simple free solution, the Microsoft Threat Modeling Tool.
The logic here is simple: we draw the information flows of our application in as much detail as possible, and then the program analyzes them using the STRIDE methodology from Microsoft’s SDL (Security Development Lifecycle). STRIDE stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service and Elevation of Privilege.
The main idea behind this approach is that there is a predictable set of threats that can be sorted into these six categories. I’ve drawn a simple example of a web application: we access the web application over HTTP, and the application in turn works with the database using SQL queries.
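The six STRIDE categories applied to this simple browser → web app → database model might look like the table below. The example threats are my own hypothetical illustrations, not output from the tool.

```python
# The six STRIDE categories, each with a hypothetical threat for the
# simple "browser -> web app -> database" model described above.
STRIDE = {
    "Spoofing":               "an attacker impersonates a legitimate user over HTTP",
    "Tampering":              "SQL queries are altered on the way to the database (SQL injection)",
    "Repudiation":            "no audit log records who queried the database",
    "Information Disclosure": "plain HTTP leaks session data to eavesdroppers",
    "Denial of Service":      "request flooding exhausts database connections",
    "Elevation of Privilege": "a web user obtains database admin rights",
}

for category, example in STRIDE.items():
    print(f"{category}: {example}")
```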
Then we just click «Analyze» and get a brief report on all the possible threats:
We can also generate a detailed report on each one of them:
If you participate in software development, this work should be done at every stage. Hey, it’s called a development lifecycle for a reason. It’s your job to address every threat found here. If you use traffic encryption, the algorithms must be secure and reliable, and key management must be appropriate. In addition, keep in mind that protection of personal information is also subject to regulatory compliance: GDPR is almost here, and PCI DSS isn’t going anywhere soon either.
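As one small, concrete example of «secure and reliable» transport settings, here is a sketch using Python’s standard `ssl` module: certificate validation stays on (the default of `create_default_context`) and the minimum protocol version is pinned to TLS 1.2. Treat this as an assumed baseline for illustration, not a substitute for a full review of your crypto and key management.

```python
import ssl

# A client-side TLS context: create_default_context() enables certificate
# validation and hostname checking by default; we additionally require
# TLS 1.2 or newer (assumed baseline for this sketch).
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```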