Supply chain intake sandboxes
Let's speculate on what a system of intake sandboxes would look like and how it might work. I'd like to do it without breaking the budget, but given that this is rambling speculation I'm not going to worry about that here. As a basis for this exercise, let's take the SolarWinds breach and Sunspot as the model of the most successful supply chain attack to date.
One of the interesting details here is how Sunburst, the backdoor that Sunspot injected into the build, delays its execution for a set period (up to roughly two weeks) before doing anything malicious. That implies the attackers were confident no high-security intake sandbox would hold a binary longer than that before releasing it to production environments. Probably some internal SLA for a high-security environment that was the primary target of the attack (note how a secondary piece of leaked or stolen information provides the security bypass).
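To make the evasion concrete, here's a toy sketch of a time-gated activation check. This is not Sunburst's actual code; the constant and names are hypothetical, and the point is only that a fixed sandbox dwell time is trivially outwaited:

```python
import datetime

# Hypothetical dormancy window chosen to outlast any plausible sandbox SLA.
DORMANCY = datetime.timedelta(days=12)

def should_activate(install_time: datetime.datetime,
                    now: datetime.datetime) -> bool:
    """Stay dormant until well past the expected sandbox dwell time."""
    return now - install_time >= DORMANCY

install = datetime.datetime(2020, 3, 1)
# Inside a 9-day sandbox window the implant looks completely benign:
assert not should_activate(install, install + datetime.timedelta(days=9))
# Once released to production and left running, it wakes up:
assert should_activate(install, install + datetime.timedelta(days=14))
```

Anything this simple defeats a fixed observation window, which is why the window itself is the weakness.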
So knowing this, how would our sandbox work?
Well, it needs to be continuous: more like a persistent staging environment for all internal systems and processes. We can't have a predictable window of time in which we carefully scrutinize application behavior, because that window can be determined and easily bypassed.
Does this mean some arbitrary, internal 7 day SLA can’t be honored?
I think you can still honor it, but honoring it doesn't mean you stop watching the new patch or binary very carefully. It also means we want to keep things in our sandbox longer when we can, though there is probably little stopping us from pushing something through quickly when needed. Our sandbox might need additional mechanisms outside of it so that we can roll back to a known good state if, one day, that patch we installed months ago suddenly starts trying to beacon out to a sketchy data center in the Seychelles. The SolarWinds compromise is proof positive that you can be infected for a long time and never know it.
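As a rough sketch of that rollback mechanism, assuming the virtualized environment is snapshotted just before each intake release goes live, you could keep a registry mapping each released artifact to its pre-install snapshot. All names here are hypothetical:

```python
import hashlib
from typing import Optional

class RollbackRegistry:
    """Map each released artifact to the environment snapshot taken
    just before it was installed, so a months-later detection still
    has a known good state to return to."""

    def __init__(self):
        self._snapshots = {}  # artifact sha256 -> pre-install snapshot id

    def record_release(self, artifact: bytes, snapshot_id: str) -> str:
        digest = hashlib.sha256(artifact).hexdigest()
        self._snapshots[digest] = snapshot_id
        return digest

    def rollback_target(self, bad_digest: str) -> Optional[str]:
        """Snapshot that predates the suspect artifact, or None."""
        return self._snapshots.get(bad_digest)

reg = RollbackRegistry()
digest = reg.record_release(b"patch-v2.bin contents", "snap-2020-03-01")
assert reg.rollback_target(digest) == "snap-2020-03-01"
```

The real work is in the snapshotting itself, of course; the registry just guarantees you can still answer "what did the environment look like before this thing arrived?" long after the fact.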
And our sandbox needs to look like production in a way that isn't easily distinguishable. We can't just throw a VM on an isolated box and expect it to be adequate. VMs are pretty much required here, though, which implies that our production environment also needs to be virtualized. There needs to be a network that looks like where we are going to run the software. This means an AD domain controller if the software needs to live in a Windows network, and possibly a traffic generator of some sort so the system state isn't idle too much. We'll also need other active endpoints that look like our production network. None of these fake sandbox systems should be real; in fact they should probably just be very convincing honeypots. Or maybe ephemeral devices with a very short lifetime that are constantly getting rehydrated and replaced, with the system state examined on every teardown.
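The ephemeral-endpoint idea can be sketched as a simple lifecycle loop: decoys carry a short TTL, get their state examined at teardown, and are immediately replaced so the fleet never shrinks. This is a toy model with made-up names, not a deployment tool:

```python
import itertools

class DecoyFleet:
    """Toy model of short-lived decoy endpoints that are examined on
    teardown and immediately rehydrated with a fresh replacement."""

    def __init__(self, size: int, ttl_ticks: int):
        self.ttl = ttl_ticks
        self._ids = itertools.count()
        # endpoint id -> remaining ticks before teardown
        self.endpoints = {next(self._ids): ttl_ticks for _ in range(size)}
        self.examined = []  # ids whose state was inspected at teardown

    def tick(self):
        """Advance one time step: expire, examine, and replace decoys."""
        for eid in list(self.endpoints):
            self.endpoints[eid] -= 1
            if self.endpoints[eid] == 0:
                del self.endpoints[eid]
                self.examined.append(eid)               # inspect state here
                self.endpoints[next(self._ids)] = self.ttl  # rehydrate

fleet = DecoyFleet(size=3, ttl_ticks=2)
fleet.tick()
fleet.tick()
assert len(fleet.endpoints) == 3       # fleet size stays constant
assert fleet.examined == [0, 1, 2]     # originals were examined and replaced
```

The useful property is that every decoy's full state gets inspected on a short, rolling cadence, so a tampered endpoint can't quietly persist.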
Entry into the sandbox should also trigger different analysis workflows, such as secondary checks on the file hashes through separate channels. And even though it's not very effective, signature scanning is probably required too. It might be worthwhile to add the scanning engines into the sandbox periodically and watch for any changes once they are in the environment.
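The secondary hash check is simple to sketch: hash the artifact locally and compare against a digest published by the vendor over a channel independent of the one that delivered the binary. The fetch function below is a stand-in, since the actual channel is whatever the vendor publishes (a signed advisory, a separate web page, etc.):

```python
import hashlib

def local_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def vendor_published_digest(artifact_name: str) -> str:
    # Stand-in for fetching the vendor's published hash over a channel
    # independent of the download itself. Name and contents are made up.
    published = {"agent-v5.2.bin": local_digest(b"expected contents")}
    return published[artifact_name]

def admit_to_sandbox(artifact_name: str, data: bytes) -> bool:
    return local_digest(data) == vendor_published_digest(artifact_name)

assert admit_to_sandbox("agent-v5.2.bin", b"expected contents")
assert not admit_to_sandbox("agent-v5.2.bin", b"tampered contents")
```

Worth noting the limitation: a build-time implant like Sunspot produces a binary the vendor themselves signs and publishes hashes for, so this check only catches tampering in transit, not upstream.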
Binary analysis is also something we should consider, or failing that, VM-level debugging. We would want to be looking for new network capabilities and process detection or manipulation capabilities. Being able to investigate our new or updated application at that very low level would give us a real chance of detecting something like Sunburst.
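A crude version of that capability triage: diff which network or process-manipulation API names appear in an update versus the previous release. A real pipeline would parse the PE import table rather than grep raw bytes, but the idea is the same; the marker lists below are a small illustrative subset of real Win32 API names:

```python
# Illustrative subsets of Win32 API names associated with each capability.
NETWORK_MARKERS = [b"InternetOpenA", b"WSAStartup", b"HttpSendRequest"]
PROCESS_MARKERS = [b"OpenProcess", b"WriteProcessMemory", b"CreateRemoteThread"]

def triage(binary: bytes) -> dict:
    """Flag which capability markers appear in an artifact's raw bytes."""
    return {
        "network": sorted(m.decode() for m in NETWORK_MARKERS if m in binary),
        "process": sorted(m.decode() for m in PROCESS_MARKERS if m in binary),
    }

# The interesting signal is the delta between releases: an update that
# suddenly gains network capability deserves a much closer look.
old = b"...OpenProcess..."
new = b"...OpenProcess...WSAStartup...HttpSendRequest..."
delta = set(triage(new)["network"]) - set(triage(old)["network"])
assert delta == {"HttpSendRequest", "WSAStartup"}
```

String matching is easy to defeat with packing or obfuscation, which is exactly why the deeper binary analysis or VM-level debugging mentioned above is worth the investment.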
Ok, enough rambling speculation.
This sandbox seems like it would be a major initiative, most likely needing its own support and engineering team. Even though I haven't considered budget while thinking this through, it's certainly going to be considerable. When I think about how feasible this would be, I really think only the largest enterprises could consider a supply chain intake sandbox like this. Balanced against risk, most enterprises would not want to go through the effort to secure their upstream supply chain in this way. Maybe if a security vendor were to build such a system and spread the cost across enterprise clients? Maybe.