Security Engineering In DevOps

Here we are with another early-morning, ultra-caffeinated, high-level picture (or rant) coming to you from the depths of the Internet.

Today we will discuss how to incorporate security engineering techniques into our work as developers, system administrators and those people in between that we call DevOps people.

The DevOps dream

The DevOps buzzword began with the idea that there was a lot to be gained from improving communication between the development and operations domains in software engineering.

For developers, this meant that they needed to factor infrastructure details such as service scalability, networking and data management into their designs.

For system administrators, this meant that they needed to stop seeing software as black boxes operated through configuration files, as well as incorporate practices that originated in software engineering, such as version control and the idea that infrastructure and operations themselves could be defined as code with tools such as Terraform or Ansible.

In general, the approach was to break silos, improve communication and expand awareness of all the relevant aspects of the system for everyone involved.

In some places, this meant that becoming a full-stack developer implied added responsibilities such as infrastructure work, which in practice meant mastering a cloud platform of some kind or a container orchestration system such as Kubernetes.

When we add prefixes to these buzzwords, we are usually thinking in terms of priority.

  • DevSecOps: development comes first, security second, and then we will figure out how to deploy it.
  • SecDevOps: Security comes first, then development and finally operations.

I personally think that this way of thinking is wrong and misses the point about the original spirit of DevOps.

DevOps was a backlash in an age of an increasing push for specialization in the tech industry, especially in big tech.

Specialization is not bad in itself, but it often leads to the kinds of problems the DevOps movement originally tried to solve, such as information silos, system redundancy, bureaucracy and arbitrarily large complexity.

For example, a team might only know about the modules or services it interacts with directly to implement its very focused and concrete use case, nothing more, nothing less.

This would mean that they are unaware of problems or solutions on the other side of the product or the organization, and completely detached from how their system is actually being used, which makes it hard to improve it.

Instead, I would propose that systems should ideally be considered as a whole, and that every person involved in a project should have a more or less complete picture of how everything fits together.

Once you have a complete picture, you can do DevOps and build secure systems.

Yeah, I know that in reality this is seldom achievable, due to the sheer complexity of modern systems, and especially the ones found within red-taped corporate environments.

Even assuming that your team members all share the same mindset and take pride in doing quality work, there are organizational incentives to keep parts of the system completely secret, plus plain chaos and a lack of up-to-date or meaningful documentation, all of which push against the transparency, clarity and simplicity that would make this way of working practical.

The most realistic middle way is to segment systems into parts that a small team can manage and to define clear interfaces and responsibilities between each part of the system, which is the work of a good systems architect.

Now, how do we fit security operations and secure design into the picture?

You essentially incorporate security engineering into your existing activities.

Threat modeling

Most of the software being developed is not made by teams well versed in a security engineering culture.

This means that the path to building and managing more secure systems must be an incremental change in mindset and awareness within those teams.

Rather than proposing a vendor or some specific tool, or letting already exhausted and burned-out teams be led astray by paranoia, it is usually far more productive to build a threat model in order to figure out what the trust boundaries of the system are and what happens when they are breached.

You do this on a whiteboard with your teammates, on some periodic cadence or after a milestone has been achieved, going through the STRIDE list of threats.

STRIDE is essentially the list of reasons why you will be on call over the weekend and why you will spend it fixing bugs or scaling out your system. A small sketch of how a session's findings could be captured follows the list.

  • Spoofing: Someone or something impersonates someone or something it is not.
  • Tampering: Someone or something modifies information they should not be modifying.
  • Repudiation: Someone or something can deny actions they have actually performed because of the absence of proof.
  • Information disclosure: Someone or something reveals information that should be confidential.
  • Denial of service: Someone or something denies access to a resource or asset by deliberately exhausting it.
  • Elevation of privilege: Someone or something with a predefined access level is able to increase their privileges in some unexpected way.
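
If it helps to make the outcome concrete, here is a minimal sketch (in Python, purely illustrative; the boundary and scenarios are made up) of how the findings of such a session could be captured per trust boundary, so that each one can later become a tracked issue:

    from dataclasses import dataclass, field

    STRIDE = [
        "Spoofing", "Tampering", "Repudiation",
        "Information disclosure", "Denial of service", "Elevation of privilege",
    ]

    @dataclass
    class TrustBoundary:
        """One trust boundary of the system and the threats discussed against it."""
        name: str
        threats: dict = field(default_factory=lambda: {t: [] for t in STRIDE})

        def add(self, category: str, scenario: str) -> None:
            self.threats[category].append(scenario)

    # Hypothetical boundary and scenarios, as they might come out of a whiteboard session.
    boundary = TrustBoundary("browser -> API gateway")
    boundary.add("Spoofing", "A stolen session token is reused from another device")
    boundary.add("Denial of service", "An unauthenticated endpoint is hit with very large payloads")

    for category, scenarios in boundary.threats.items():
        print(f"{category}: {scenarios or 'not discussed yet'}")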

Going through this, your team will begin to derive problems and issues to solve and activities to execute, so it might be a good exercise at the end of a work cycle, for example during a sprint review.

It will also motivate people to think in terms of the system as a whole, to understand what the trust boundaries are and which fundamental mechanisms enforce them.

Make it a game: that will make it more fun, relieve some of the pressure and heaviness that these topics usually carry, and turn it into something cool to do on a Friday.

Once you have a threat model, you are ready to do security engineering.

Doing security engineering

In operations

Generally speaking, good cybersecurity at the operations level comes down to good system administration skills, such as correctly defining users, groups and roles in order to ensure that unauthorized access is never achieved.

This is achieved through well-defined workflows in which the privilege state of systems is updated across the network, which is usually the purpose of directory services.
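
As a tiny illustration of what correctly defining users, groups and roles can look like in day-to-day work, here is a sketch (assuming a Unix host where the sudo group grants admin rights, and a hypothetical allow-list) that flags accounts holding privileges nobody signed off on:

    import grp

    # Hypothetical allow-list: the only accounts expected to hold admin rights.
    EXPECTED_ADMINS = {"alice", "bob"}

    def audit_privileged_group(group_name: str = "sudo") -> set:
        """Return members of a privileged group that are not on the allow-list."""
        members = set(grp.getgrnam(group_name).gr_mem)
        return members - EXPECTED_ADMINS

    if __name__ == "__main__":
        for user in sorted(audit_privileged_group()):
            print(f"unexpected privileged account: {user}")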

Another important thing is to have a good backup and recovery strategy for when a threat is finally realized, such as an enterprise-wide ransomware infection.

Further improvements can be layered on top to mature the organization's security posture according to its needs and requirements.

For example, you might also want to monitor all the third-party tools you depend on and keep track of their known vulnerabilities in some kind of dashboard.
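
A dashboard can start very small. The sketch below, for instance, asks the public OSV.dev index whether a pinned dependency has known advisories (the dependency list is illustrative, and the requests library is assumed to be available):

    import requests

    # Illustrative list of pinned third-party dependencies to watch.
    DEPENDENCIES = [
        {"ecosystem": "PyPI", "name": "jinja2", "version": "2.11.2"},
    ]

    def known_vulnerabilities(ecosystem: str, name: str, version: str) -> list:
        """Query the OSV.dev API for advisories affecting one pinned package."""
        response = requests.post(
            "https://api.osv.dev/v1/query",
            json={"version": version, "package": {"name": name, "ecosystem": ecosystem}},
            timeout=10,
        )
        response.raise_for_status()
        return [vuln["id"] for vuln in response.json().get("vulns", [])]

    for dep in DEPENDENCIES:
        advisories = known_vulnerabilities(**dep)
        print(f"{dep['name']} {dep['version']}: {advisories or 'no known advisories'}")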

At the operations layer, deploying and maintaining observability of the relevant aspects of the system, as well as setting up logging and alerting, can be achieved through a whole array of solutions, from Intrusion Detection/Prevention Systems to deep integration with some kind of SIEM that handles all the information and events occurring in your network and helps ensure non-repudiation.
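
Much of this tooling ultimately consumes structured events, so even before an IDS or a SIEM is in place it pays to emit logs a collector can ingest. A minimal sketch (the event fields are illustrative, not a standard schema):

    import json
    import logging
    import sys
    from datetime import datetime, timezone

    logger = logging.getLogger("audit")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.StreamHandler(sys.stdout))

    def audit_event(actor: str, action: str, resource: str, outcome: str) -> None:
        """Emit one JSON audit line with enough context to support non-repudiation."""
        logger.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "resource": resource,
            "outcome": outcome,
        }))

    audit_event("alice", "role-change", "billing-service", "granted")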

This is a very large topic we might discuss in the future.

In development

Building software that achieves some objective without considering cybersecurity in the design is what drove the Internet, in its original form, into the mess it has been for the last few decades.

The good thing is that it is slowly improving and more developers such as myself are beginning to think in terms of secure systems and protocols.

When developing applications you should strive to understand how the confidentiality, integrity and availability of the data you are handling are being achieved.

Additionally, developing at the application layer carries its own risks.

All the work of the system administrators can be invalidated because the input fields of your website are not being correctly filtered, introducing cross-site scripting or SQL injection vulnerabilities.

At this level, most of the solutions come down to having secure filters in place for your input data; this is where the challenge lies and where most bug hunters make their living.
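
To make the difference concrete, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for whatever database driver you actually use; the table and the hostile input are made up:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    user_input = "nobody' OR '1'='1"  # hostile input arriving from a web form

    # Vulnerable: the input becomes part of the SQL text and changes the query logic.
    leaked = conn.execute(
        f"SELECT email FROM users WHERE name = '{user_input}'"
    ).fetchall()

    # Safer: the driver passes the value as a bound parameter, never as SQL.
    bound = conn.execute(
        "SELECT email FROM users WHERE name = ?", (user_input,)
    ).fetchall()

    print("string-built query returned:", leaked)  # alice's row leaks out
    print("parameterized query returned:", bound)  # empty, as it should be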

You can still end up developing an insecure application on top of secure underlying protocols such as TLS or IPsec, so a whole-system view is needed to make the right decisions.

And in order to have this view, you need to ask yourself the right questions.

  • Who, or what, is using this system?
  • Has this message been authenticated? What if it is captured and replayed? (See the sketch after this list.)
  • Is the data I am handling being encrypted during the communication? How?
  • Is the database automatically encrypting records, or should I do something about it?
  • Could this input data contain something it should not have? Should I trust it?
  • Is this new workflow within the trust boundaries of my application, or should I consider an authentication mechanism?
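
As an example of the second question, here is a minimal sketch of one common answer to the replay problem, using only Python's standard library: each message carries an HMAC over a timestamp, a nonce and the payload, and the receiver rejects anything stale or already seen. Key distribution and clock skew handling are deliberately out of scope.

    import hashlib
    import hmac
    import secrets
    import time

    SHARED_KEY = b"replace-with-a-real-secret"  # assumed to be shared out of band
    MAX_AGE_SECONDS = 30
    seen_nonces = set()

    def sign(payload: bytes) -> dict:
        """Attach a timestamp, a one-time nonce and an HMAC to the payload."""
        timestamp = str(int(time.time()))
        nonce = secrets.token_hex(16)
        mac = hmac.new(SHARED_KEY, f"{timestamp}:{nonce}:".encode() + payload, hashlib.sha256)
        return {"payload": payload, "timestamp": timestamp, "nonce": nonce, "mac": mac.hexdigest()}

    def verify(message: dict) -> bool:
        """Accept a message only if it is authentic, fresh and not seen before."""
        expected = hmac.new(
            SHARED_KEY,
            f"{message['timestamp']}:{message['nonce']}:".encode() + message["payload"],
            hashlib.sha256,
        ).hexdigest()
        if not hmac.compare_digest(expected, message["mac"]):
            return False  # tampered with, or signed with a different key
        if time.time() - int(message["timestamp"]) > MAX_AGE_SECONDS:
            return False  # too old: likely a replay of a captured message
        if message["nonce"] in seen_nonces:
            return False  # already accepted once: definitely a replay
        seen_nonces.add(message["nonce"])
        return True

    msg = sign(b"transfer 100 credits")
    print(verify(msg))  # True on first delivery
    print(verify(msg))  # False when the same message is replayed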

Yeah, it is not easy, and it requires very deep knowledge and a lot of experience; that is why, as a developer, I decided to pursue additional training as a cybersecurity engineer.

Regarding code review, the best thing I can recommend is to set up a local instance of SonarQube and review the code of your project there before submitting it for review by your peers. Extra points if you set up a team-wide instance and define additional rulesets for code quality.

In my opinion, this is the best way I have found to systematically detect mistakes in your code.

We might make a future article about how to achieve this.

Conclusions

Security, and especially cybersecurity, is hard, and perfect security is impossible to achieve in the same way that total test coverage is; we all know that.

It is very hard to implement these improvements in one sweep, and they generally benefit from a strategic deployment in phases as the security posture of the organization matures.

However, there are low-effort, high-leverage activities we can incorporate into our work to raise the bar, not only to keep threats at bay but also to improve as engineers and build better and more reliable systems.

whoami

Jaime Romero is a software engineer and cybersecurity expert operating in Western Europe.