Author: Gilbertine Onfroi

“2017 EU Security Awareness Summit – After Action Report”

  The SANS EU Security Awareness Summit is an annual event that brings together security awareness professionals and industry experts from around the world to share and learn from each other how to manage human risk. This year was the largest event ever in Europe, bringing together over 130 awareness professionals for a jam-packed … Continue reading 2017 EU Security Awareness Summit – After Action Report

from lspitzner

How public-private partnerships can combat cyber adversaries

For several years now, policymakers and practitioners from governments, CERTs, and the security industry have been speaking about the importance of public-private partnerships as an essential part of combating cyber threats. It is impossible to attend a security conference without a keynote presenter talking about it. In fact, these conferences increasingly include sessions or entire tracks dedicated to the topic. During the three conferences I've attended since June (two US Department of Defense symposia and NATO's annual Information Symposium in Belgium), the message has been consistent: public-private information-sharing is crucial to combat cyber adversaries and protect users and systems.

Unfortunately, we stink at it. Information-sharing is the Charlie Brown football of cyber: we keep running toward it only to fall flat on our backs as attackers continually pursue us. Just wait until next year. It's become easier to talk about the need to improve information-sharing than to actually make it work, and it's now the technology industry's convenient crutch. Why? Because no one owns it, so no one is accountable. I suspect we each have our own definition of what information-sharing means, and of what success looks like. Without a sharp vision, can we really expect it to happen?

So, what can be done?

First, some good news: the security industry wants to do this, to partner with governments and CERTs. So, when we talk about it at conferences, or when a humble security advisor in Redmond blogs about it, it's because we are committed to finding a solution. Microsoft recently hosted BlueHat, where hundreds of malware hunters, threat analysts, reverse engineers, and product developers from the industry put aside competitive priorities to exchange ideas and build partnerships. In my ten years with Microsoft, I've directly participated in and led information-sharing initiatives that we established for the very purpose of advancing information assurance and protecting cyberspace. In fact, in 2013, Microsoft created a single legal and programmatic framework to address this issue, the Government Security Program.

For the partnership to work, it is important to understand and anticipate the requirements and needs of government agencies. For example, we need to consider what gets shared: cyber threat information, YARA rules, attacker campaign details, IP addresses, hosts, network traffic, and the like.
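To make one of those artefacts concrete, here is a minimal sketch (in Python, using the yara-python package) of how a receiving team might consume a shared YARA rule. The rule text, the byte pattern, and the scanned file name are purely illustrative assumptions, not part of any actual sharing programme.

# Minimal sketch: consuming a shared YARA rule with yara-python.
# The rule text and the scanned file are hypothetical examples.
import yara

SHARED_RULE = r'''
rule Example_Shared_Indicator
{
    strings:
        $s1 = "example-malicious-marker" ascii
        $s2 = { 6A 40 68 00 30 00 00 }   // illustrative byte pattern
    condition:
        any of them
}
'''

rules = yara.compile(source=SHARED_RULE)

# Scan a suspicious file received from a partner (path is illustrative).
matches = rules.match("sample_from_partner.bin")
for m in matches:
    print("Matched rule:", m.rule)

The point is simply that a rule exchanged between partners is immediately actionable: it compiles and runs the same way on both sides, which is part of why the format keeps coming up in these discussions.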

What can governments and CERTs do to better partner with industry?

  • Be flexible, especially on the terms. Communicate. Prioritize. In my experience, the mean-time-to-signature for a government to negotiate an info-sharing agreement with Microsoft is between six months and THREE YEARS.
  • Prioritize information sharing. If this is already a priority, close the gap. I fear governments' attorneys are not sufficiently aware of how important the agreements are to their constituents. The information-sharing agreements may well be non-traditional agreements, but if information-sharing is truly a priority, let's standardize and expedite the agreements. Start by reading the 6 November Department of Homeland Security OIG report, "DHS Can Improve Cyber Threat Information-Sharing".
  • Develop and share with industry partners a plan to show how government agencies will consume and use our data. Let industry help government and CERTs improve our collective ROI. Before asking for data, let's ensure it will be impactful.
  • Develop KPIs, quantitative or qualitative, to measure whether an information-sharing initiative is making a difference. In industry, we could do a better job at this, as we generally assume that we're providing information for the right reason. However, I frequently question whether our efforts make a real difference. Whether we look for mean-time-to-detection improvements or other metrics, this is an area for improvement.
  • Commit to feedback. Public-private information-sharing implies two-way communication. Understand that more companies are making feedback a criterion to justify continuing investment in these not-for-profit engagements. Feedback helps us justify up the chain the efficacy of efforts that we know are important. It also improves two-way trust and contributes to a virtuous cycle of more and closer information-sharing. At Microsoft, we require structured feedback as the price of entry for a few of our programs.
  • Balance interests in understanding today's and tomorrow's threats with an equal commitment to locking down what is currently owned. (My favorite.) Information-sharing usually includes going after threat actors and understanding what's coming next. That's important, but in an assume-compromise environment, we need to continue to hammer on the basics:
    • Patch. If an integrator or on-site provider indicates patching and upgrading will break an application, and if that is used as an excuse not to patch, that is a problem. Authoritative third parties such as US-CERT, SANS, and others recommend a 48- to 72-hour patch cycle. Review www.microsoft.com/secure to learn more.
      • Review www.microsoft.com/sdl to learn more about tackling this issue even earlier in the IT development cycle, and how to have important conversations with contractors, subcontractors, and ISVs in the software and services supply chain.
    • Reduce administrative privilege. This is especially important for contractor or vendor accounts. Up to 90 percent of breaches stem from credential compromise, largely caused by missing or obsolete administrative, physical, and technical controls over sensitive assets. Basic information-sharing demands that we focus on this. Here is guidance regarding securing access.

Ultimately, we in the industry can better serve governments and CERTs by incentivizing migrations to newer platforms that offer more built-in security and that are developed more securely. As we think about improving information-sharing, let's be clear that this includes not only sharing technical details about threats and actors but also guidance on making governments fundamentally more secure on newer and more secure technologies.

 

from Jenny Erie

[SANS ISC] Tracking Newly Registered Domains

I published the following diary on isc.sans.org: “Tracking Newly Registered Domains“:

Here is the next step in my series of diaries related to domain names. After tracking suspicious domains with a dashboard and proactively searching for malicious domains, let's focus on newly registered domains. There are a huge number of domain registrations performed every day (on average, a few thousand per day across all TLDs combined). Why focus on new domains? With the multiple DGAs ("Domain Generation Algorithms") used by malware families, it is useful to track newly created domains and correlate them with your local resolvers' logs. You could detect some emerging threats or suspicious activities… [Read more]
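As a rough illustration of the correlation idea described in the diary (the full workflow is in the diary itself), the following Python sketch flags queries seen in a local resolver log when the queried domain appears on a list of newly registered domains. The file names and the log format are assumptions; adapt both to your own feeds and logging.

# Sketch: flag DNS queries for domains that appear on a newly-registered list.
# "new_domains.txt" (one domain per line) and "resolver.log" are hypothetical.
new_domains = set()
with open("new_domains.txt") as f:
    for line in f:
        new_domains.add(line.strip().lower().rstrip("."))

with open("resolver.log") as log:
    for line in log:
        fields = line.split()
        if not fields:
            continue
        # Assume the queried name is the last whitespace-separated field;
        # subdomains would need to be reduced to their registered domain first.
        qname = fields[-1].strip().lower().rstrip(".")
        if qname in new_domains:
            print("Newly registered domain queried:", qname)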

[The post [SANS ISC] Tracking Newly Registered Domains has been first published on /dev/random]

from Xavier

Botconf 2017 Wrap-Up Day #3

And this is already the end of Botconf. Time for my last wrap-up. The day started a little bit later to allow some people to recover from the social event. It started at 09:40 with a talk presented by Anthony Kasza, from Palo Alto Networks: “Formatting for Justice: Crime Doesn’t Pay, Neither Does Rich Text“. Everybody knows the RTF format… even more since the famous CVE-2017-0199. But what’s inside an RTF document? As the name says, it is used to format text. It was created by Microsoft in 1987. It has similarities with HTML:

[Image: RTF vs HTML]

Entities are represented with ‘{‘ and ‘}’. Example:

{\i This is some italic text}

There are control words like “\rtf”, “\info”, “\author”, “\company”, “\i”, “\AK”, … It is easy to obfuscate such a document with extra whitespace, headers or nested elements:

{\rtf [\info]] == {\rtf }}

This means that writing signatures is complex. Also, just rename the document with a .doc extension and it will be opened by Word. How to generate RTF documents? There are the official “tools” like Microsoft Word or WordPad but there are, of course, plenty of malicious tools:

  • 2017-0199 builder
  • wingd/stone/ooo
  • Sofacy, Monsoon, MWI
  • Ancalog, AK builder

What about analysis tools? Here also, it is easy to build a toolbox with nice tools: rtfdump, rtfobj, pyRTF and YARA are some of them. To write good signatures, Anthony suggested focusing on suspicious words (a small sketch follows the list below):

  •  \info
  • \object
  • DDEAUTO
  • \pict
  • \insrsid or \rsidtbl
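As a small illustration of that idea (not Anthony's actual signatures), the Python sketch below counts occurrences of the keywords listed above in an RTF file. The file name is hypothetical, and remember from the talk that obfuscation can hide or split these tokens, so a simple scan like this is only a first pass.

# Sketch: look for suspicious RTF control words / keywords in a document.
# Keyword list taken from the talk summary above; the scanned file is hypothetical.
import re

SUSPICIOUS = [rb"\\info", rb"\\object", rb"DDEAUTO", rb"\\pict", rb"\\insrsid", rb"\\rsidtbl"]

with open("suspicious.rtf", "rb") as f:
    data = f.read()

for pattern in SUSPICIOUS:
    hits = len(re.findall(pattern, data))
    if hits:
        print(pattern.decode(), "found", hits, "time(s)")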

DDEAUTO has been a good candidate for a while and is seen as the “most annoying bug of the year” due to its abuse in everything (RTF and other documents, e-mails, calendar entries…). Anthony finished his talk by providing a challenge based on an RTF file.

The next talk was presented by Paul Jung: “PWS, Common, Ugly but Effective“. PWS, also known as “info stealers”, are a very common type of malware. They steal credentials from many sources (browsers, files, registries, wallets, etc).
[Image: PWS]

They also offer “bonus” features like screenshot grabbers or keyloggers. How to find them? Buy one, find a cracked one, or use an open-source one. Some of them even have promotional videos on YouTube! A PWS is based on a builder that generates a specific binary from a config file; it is delivered via protocols like email or HTTP, and the stolen data are managed via a control panel. Paul reviewed some well-known PWS like JPro Crack Stealer, Pony (the most famous), Predator Pain and Agent Tesla. The last one promotes itself as “not being a malware”. Some of them support more than 130 different applications to steal passwords from. Some do not reinvent the wheel and just use external tools (e.g. the Nirsoft suite). While it is difficult to detect them before the infection, it's quite easy to spot them afterwards based on the noise they generate in log files. They use specific queries (see the sketch after the list below):

  • “POST /fre.php” for Lokibot
  • “POST /gate.php” for Pony or Zeus
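A trivial way to exploit that noise is to search proxy or web-server logs for those URIs. The Python sketch below shows the idea; the log file name and format are hypothetical, and only the two patterns mentioned above are used.

# Sketch: spot PWS check-in noise in web/proxy logs.
# Log file name and format are hypothetical; the URI patterns come from the talk.
PATTERNS = {
    "POST /fre.php": "possible Lokibot",
    "POST /gate.php": "possible Pony/Zeus",
}

with open("proxy.log") as log:
    for lineno, line in enumerate(log, start=1):
        for pattern, label in PATTERNS.items():
            if pattern in line:
                print(f"line {lineno}: {label}: {line.strip()}")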

Very nice presentation!

After the first coffee refill break, Paul Rascagnères presented “Nyetya Malware & MeDoc Connection“. The presentation was a recap of the bad story that affected Ukraine a few months ago. It started with a phone call saying “We need help“. They received some info to start the investigation, but their telemetry did not return anything juicy (Talos collects a huge amount of data to build its telemetry). Paul explained the case of M.E. Doc, a company providing a Windows application for tax processing. The company's servers were compromised and the software was modified. Then, Paul reviewed the Nyetya malware. It used WMI, PsExec, EternalBlue and EternalRomance, and scanned ranges of IP addresses to infect more computers. It also used a modified version of Mimikatz. Note that Nyetya cleared the infected hosts' logs. This is a good reminder to always push logs to an external system to prevent losing pieces of evidence.
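On that last point, one simple way to keep a copy of events off the host is to forward them to a remote collector. Here is a minimal, hypothetical Python sketch using the standard library's syslog handler; the collector address is an assumption, and in practice you would normally configure your syslog daemon or logging agent rather than application code.

# Sketch: ship application logs to a remote syslog collector so that a local
# log wipe (as Nyetya performed) does not destroy the only copy.
# "logserver.example.local" is a placeholder for your own collector.
import logging
import logging.handlers

logger = logging.getLogger("offhost")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=("logserver.example.local", 514)))

logger.info("process started")  # this event also lands on the remote collector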

The next talk was about a system to track the Locky ransomware based on its DGA: “Math + GPU + DNS = Cracking Locky Seeds in Real Time without Analyzing Samples“. Yohai Einav and Alexey Sarychev explained how they solved the problem of detecting, as fast as possible, new variations of the domain names used by the Locky ransomware. The challenges were:

  • To get the DGA (it's public now)
  • To be able to process a vast search space. The namespace could be enormous (from a 3-digit seed to 4, then 5, 6). There is a scalability problem.
  • To map the ambiguity (and avoid collisions with other DGAs)

The solution they developed is based on GPUs (for maximum speed). If you're interested in the Locky DGA, a public reimplementation is available at https://github.com/baderj/domain_generation_algorithms/tree/master/locky, and you can have a look at their dataset.
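For readers unfamiliar with DGAs, here is a deliberately simplified, hypothetical example of how a seed and a date can be turned into candidate domains. It is not Locky's algorithm (see the link above for that); it only illustrates why defenders end up enumerating a large seed/date space.

# Sketch: a toy DGA, NOT Locky's real algorithm.
# It only shows how a small seed plus a date yields many candidate domains.
import hashlib
from datetime import date

def toy_dga(seed: int, day: date, count: int = 5):
    domains = []
    for i in range(count):
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        # Use the first 12 hex characters as a pseudo-random label.
        domains.append(digest[:12] + ".example")
    return domains

print(toy_dga(seed=7, day=date.today()))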

The next talk was, for me, the best of the day because it contained a lot of useful information that many people can immediately reuse in their environment to improve the detection of malicious behaviour or to improve their DFIR process. It was titled “Hunting Attacker Activities – Methods for Discovering, Detecting Lateral Movements” and presented by Keisuke Muda and Shusei Tomonaga. Based on their investigations, they explained how attackers can perform lateral movement inside a network just by using standard Windows tools (which, by default, are not flagged as malicious by antivirus software).


They presented multiple examples of commands or small scripts used to scan, pivot, cover tracks, etc. Then they explained how to detect this kind of activity. They made a good comparison of the standard Windows audit log versus the well-known Sysmon tool. They presented the pros and cons of each solution, and the conclusion is that, for maximum detection, you need both. There were so many examples that it's not possible to list them here; I just recommend having a look at the documents available online.
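As a very rough sketch of the detection side (not the speakers' actual method), the Python snippet below scans an exported Sysmon or audit log text file for built-in tools commonly abused for lateral movement. The file name, the export format, and the tool list are all assumptions; real hunting needs context such as parent process, user and timing, not just tool names.

# Sketch: flag exported log lines that mention tools often abused for
# lateral movement. "sysmon_export.txt" and the tool list are hypothetical.
SUSPECT_TOOLS = ["wmic", "psexec", "schtasks", "at.exe", "net use", "wevtutil cl"]

with open("sysmon_export.txt", encoding="utf-8", errors="ignore") as log:
    for lineno, line in enumerate(log, start=1):
        lowered = line.lower()
        for tool in SUSPECT_TOOLS:
            if tool in lowered:
                print(f"line {lineno}: possible lateral movement tooling: {tool}")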

It was an amazing presentation!

After the lunch, Jaeson Schultz, also from Talos, presented “Malware, Penny Stocks, Pharma Spam – Necurs Delivers“. The talk was a good review of the biggest active spam botnet. Just some numbers collected from multiple campaigns: 2.1M messages and 1M unique sender IP addresses from 216 countries/territories. The top countries are India, Vietnam, Iran and Pakistan. Jaeson explained that the re-use of IP addresses is so low that it's difficult to maintain blacklists.

[Image: IP address re-use]

How do the bad guys send emails? They use harvested accounts (of course) but also auto-generated addresses and common/role-based accounts. That's why the use of catch-all mailboxes is useful. Usually, big campaigns are launched from Monday to Friday, while regular campaigns are constantly running at a low speed. Jaeson presented many examples of spam messages and attachments. A good review with entertaining slides.
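To illustrate the IP re-use observation, here is a small hypothetical Python sketch that counts how often each sender IP appears in collected campaign data; the input file and its format are assumptions.

# Sketch: measure sender IP re-use across spam campaigns.
# "senders.txt" (one sender IP per received message) is hypothetical.
from collections import Counter

with open("senders.txt") as f:
    counts = Counter(line.strip() for line in f if line.strip())

reused = sum(1 for ip, n in counts.items() if n > 1)
print(f"{len(counts)} unique sender IPs, {reused} seen more than once")

If, as in the talk, almost every IP appears only once, a blacklist built from yesterday's senders will catch very little of today's campaign.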

Then, Łukasz Siewierski presented “Thinking Outside of the (Sand)box“. Łukasz works for Google (Play Store) and analyzes applications. He said that all applications submitted to Google are reviewed from a security point of view. Android has many security features: SELinux, the application sandbox, the permission model, verified boot, (K)ASLR and seccomp, but the presentation focused on the sandbox. First, why is there a sandboxing system? To prevent spyware from accessing other applications' data, to prevent applications from posing as other ones, to make it easy to attribute actions to specific apps, and to allow strict policy enforcement. But how to break the sandbox? First, the malware can ask users for a number of really excessive permissions; in this case, you just have to wait and cross your fingers that the user will click “Allow”. Another method is to use Xposed. I already heard about this framework at Hack in the Box. It can prevent apps from being displayed in the list of installed applications. It gives any application every permission, but there is one big drawback: the victim MUST install Xposed! The other method is to root the phone, inject code into other processes and profit. Łukasz explained different techniques to perform injection on Android, but it's not easy, even more so since the release of “Nougat”, which introduced new mitigation techniques.

The last slot was assigned to Robert Simmons, who presented “Advanced Threat Hunting“. It was very interesting because Robert gave nice tips to improve the threat hunting process, which can require a lot of resources that are … limited! We have small teams with limited resources and limited time. He also gave tips to better share information. A good example is YARA rules: everybody has a set of YARA rules in private directories, on laptops, etc. Why not store them in a central repository like a GitLab server (a small sketch follows below)? Many other tips were given that are worth a read if you are performing threat hunting.
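As a sketch of how such a central repository could be consumed (the directory name, file extensions and sample path are assumptions), a hunting script could simply compile every rule file found in a local clone of the shared project:

# Sketch: compile all YARA rules from a clone of a shared rules repository.
# "shared-yara-rules" and "sample.bin" are hypothetical.
import os
import yara

RULES_DIR = "shared-yara-rules"   # local clone of the team's GitLab repo

sources = {}
for root, _dirs, files in os.walk(RULES_DIR):
    for name in files:
        if name.endswith((".yar", ".yara")):
            path = os.path.join(root, name)
            sources[path] = path      # namespace -> file path

rules = yara.compile(filepaths=sources)
matches = rules.match("sample.bin")   # hypothetical sample to hunt through
print([m.rule for m in matches])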

The event was closed with the classic kind words from the team. You can already mark your agenda for the 6th edition, which will be held in Toulouse!

 

[Image: The Botconf Crew]

[The post Botconf 2017 Wrap-Up Day #3 has been first published on /dev/random]

from Xavier