“Building a Champions Program – At the #SecAwareSummit”

Editor’s Note: Cassie Clark is a security community manager for developers within Salesforce. She is one of the speakers for the upcoming Security Awareness Summit, 2/3 Aug in Nashville, TN. Below she gives an overview of her upcoming talk on Security Champions. Have you heard of the employee engagement training programs called Security Champions? Ever considered starting a Security Champions …

from lspitzner

FIRST TC Amsterdam 2017 Wrap-Up

Here is my quick wrap-up of the FIRST Technical Colloquium hosted by Cisco in Amsterdam. This was my first participation in a FIRST event. FIRST is an organization that helps with incident response, as stated on its website:

FIRST is a premier organization and recognized global leader in incident response. Membership in FIRST enables incident response teams to more effectively respond to security incidents by providing access to best practices, tools, and trusted communication with member teams.

The event was organized at the Cisco office. Monday was dedicated to a training on incident response and the next two days were dedicated to presentations, all of them focusing on the defence side (“blue team”). Here are a few notes about interesting things that I learned.

The first day started with two guys from Facebook: Eric Water & Matt Moren. They presented the solution developed internally at Facebook to solve the problem of capturing network traffic: “PCAP don’t scale”. In fact, with their solution, it scales! To investigate incidents, PCAPs are often a gold mine. They contain many IOCs but they also introduce challenges: disk space, the retention policy, the growing network throughput. When vendors’ solutions don’t fit, it’s time to build your own. OK, only big organizations like Facebook have the resources to do this, but it’s quite fun. The solution they developed can be seen as a service: “PCAP as a Service”. They started by building the right hardware for the sensors and added a cool software layer on top of it. Once collected, interesting PCAPs are analyzed using the Cloudshark service. They explained how they reached high performance by mixing NFS and their GlusterFS solution. Really a cool solution if you have multi-gigabit networks to tap!

The next presentation focused on “internal network monitoring and anomaly detection through host clustering”, by Thomas Atterna from TNO. The idea behind this talk was to explain how to monitor internal traffic too. Indeed, in many cases organizations still focus on the perimeter, but internal traffic is also important: we can detect proxies, rogue servers, C2, people trying to pivot, etc. The talk explained how to build clusters of hosts. A cluster of hosts is a group of devices that have the same behaviour, like mail servers, database servers, etc. Then you determine the “normal” behaviour per cluster and observe when individual hosts deviate from it. Clusters are based on behaviour (the amount of traffic, the number of flows, protocols, etc.). The model is useful when your network is quite closed and stable, but much more difficult to implement in an “open” environment (like university networks).
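The clustering idea above can be sketched in a few lines. This is only a minimal illustration of the principle (flag hosts whose features drift from their cluster’s average); the feature values, thresholds and method are made up, not TNO’s actual model:

```python
# Sketch: flag hosts that deviate from their cluster's "normal" behaviour.
# Features and thresholds are illustrative, not TNO's actual model.
from statistics import mean, pstdev

# Per-host features: (bytes/hour, flows/hour) - hypothetical values
clusters = {
    "mail-servers": {"mx1": (120, 300), "mx2": (130, 310), "mx3": (900, 2500)},
    "db-servers":   {"db1": (50, 40),  "db2": (55, 45)},
}

def deviants(hosts, z_threshold=1.2):
    """Return hosts whose features deviate from the cluster average."""
    out = []
    for i in range(2):  # each feature dimension
        values = [v[i] for v in hosts.values()]
        mu, sigma = mean(values), pstdev(values) or 1.0
        for name, feats in hosts.items():
            if abs(feats[i] - mu) / sigma > z_threshold and name not in out:
                out.append(name)
    return out

for cluster, hosts in clusters.items():
    print(cluster, "->", deviants(hosts))
```

In this toy data, `mx3` stands out in both dimensions while the database servers stay within their cluster’s norm.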
Then Davide Carnali gave a nice review of the Nigerian cybercrime landscape. He explained in detail how they prepare their attacks, how they steal credentials, and how they deploy the attacking platform (RDP, RAT, VPN, etc.). The second part was a step-by-step explanation of how they abuse companies to steal (sometimes a lot of!) money. An interesting fact reported by Davide: the time between the compromise of a new host (to drop a malicious payload) and the generation of new maldocs pointing to this host is only… 3 hours!
The next presentation was given by Gal Bitensky (Minerva): “Vaccination: An Anti-Honeypot Approach”. Gal (re-)explained the purpose of a honeypot and how honeypots can be defeated. Then he presented a nice review of the ways attackers detect sandboxes. Basically, when a malware sample detects something “suspicious” (read: something which makes it think it is running in a sandbox), it will just silently exit. Gal had the idea of creating a script which creates plenty of such artefacts on a Windows system to defeat malware. His tool has been released here.
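The “vaccination” trick can be sketched as follows. This is only an illustration of the idea (plant decoy artefacts that evasive malware interprets as analysis-tool markers, so it exits); the file names are examples of markers some families are known to check for, not Gal’s actual list, and a real deployment would target system paths rather than a temp directory:

```python
# Sketch of the "vaccination" idea: plant artefacts that malware interprets
# as sandbox/analysis markers so it silently exits. File names below are
# illustrative examples, not the released tool's list.
import tempfile
from pathlib import Path

FAKE_ARTEFACTS = ["sample.exe.log", "vboxhook.dll", "sbiedll.dll"]

def vaccinate(target_dir):
    """Drop empty decoy files that mimic analysis-tool artefacts."""
    created = []
    for name in FAKE_ARTEFACTS:
        p = Path(target_dir) / name
        p.touch()
        created.append(p)
    return created

# Demo in a temporary directory (a real run would target system paths)
with tempfile.TemporaryDirectory() as d:
    files = vaccinate(d)
    print([f.name for f in files])
```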
Paul Alderson (FireEye) presented “Injection without needles: A detailed look at the data being injected into our web browsers”. Basically, it was a huge review of 18 months of web-inject and other configuration data gathered from several botnets. Nothing really exciting.
The next talk was more interesting… Back to the roots: SWITCH presented their DNS Firewall solution. This is a service they provide to their members. It is based on DNS RPZ. The idea is to provide the following features:
  • Prevention
  • Detection
  • Awareness

Indeed, when a DNS request is blocked, the user is redirected to a landing page which gives more details about the problem. Note that this can have collateral effects, like blocking a complete domain (and not only specific URLs). This is a great security control to deploy. Note that RPZ support is implemented in many solutions, especially BIND 9.
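The RPZ policy logic can be sketched roughly like this. A minimal illustration only, assuming a resolver that rewrites answers for listed domains to a “walled garden” landing page; the domains and IPs are made up (documentation ranges), and a real deployment would use RPZ zones in BIND rather than Python:

```python
# Sketch of RPZ-style policy logic: rewrite DNS answers for listed domains
# to a landing page. Domains and IPs below are illustrative.
POLICY = {
    "bad.example":    "landing",   # note: blocks the whole domain, not one URL
    "*.evil.example": "landing",
}
LANDING_PAGE_IP = "192.0.2.10"  # stands in for the real landing page

def resolve(qname, real_answer):
    """Apply the policy; fall through to the real answer when unlisted."""
    labels = qname.split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if POLICY.get(candidate) == "landing" or POLICY.get("*." + candidate) == "landing":
            return LANDING_PAGE_IP  # user lands on the explanation page
    return real_answer

print(resolve("www.bad.example", "203.0.113.5"))   # rewritten
print(resolve("www.example.org", "203.0.113.7"))   # untouched
```

The first lookup also shows the collateral effect mentioned above: every name under a listed domain is redirected, not only the offending URL.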

Finally, the first day ended with a presentation by Tatsuya Ihica from Recruit CSIRT: “Let your CSIRT do malware analysis”. It was a complete review of the platform they deployed to perform more efficient automated malware analysis. The project is based on Cuckoo, which was heavily modified to match their new requirements.

The second day started with an introduction to the FIRST organization made by Aaron Kaplan, one of the board members. I liked the quote given by Aaron:

If country A does not talk to country B because of ‘cyber’, then a criminal can hide in two countries

Then, the first talk was really interesting: Chris Hall presented “Intelligence Collection Techniques“. After explaining the different sources where intelligence can be collected (open sources, sinkholes, …), he reviewed a series of tools that he developed to help automate these tasks. His tools address:
  • Using the Google API, VT API
  • Paste websites (like pastebin.com)
  • YARA rules
  • DNS typosquatting
  • Whois queries

All the tools are available here. A very nice talk with tips & tricks that you can use immediately in your organization.
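As a flavour of one of the tasks listed above, here is a minimal sketch of DNS typosquat generation. The permutation set (omission, substitution, transposition) is a small illustrative subset and is not taken from Chris Hall’s actual tooling:

```python
# Sketch: generate typosquatting candidates for a watched domain, to be
# resolved/monitored afterwards. Only a small subset of permutations.
import string

def typosquats(domain):
    name, _, tld = domain.rpartition(".")
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:] + "." + tld)          # omission
        for c in string.ascii_lowercase:
            variants.add(name[:i] + c + name[i + 1:] + "." + tld)  # substitution
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        variants.add(swapped + "." + tld)                          # transposition
    variants.discard(domain)
    return sorted(variants)

candidates = typosquats("first.org")
print(len(candidates), candidates[:5])
```

Each candidate would then be checked against passive DNS or whois to spot freshly registered look-alikes.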

The next talk was presented by a Cisco guy, Sunil Amin: “Security Analytics with Network Flows”. NetFlow isn’t a new technology. Initially developed by Cisco, there are today a lot of versions and forks. Starting from the definition of a “flow” (“a layer 3 IP communication between two endpoints during some time period”), we got a review of NetFlow. NetFlow is valuable to increase visibility of what’s happening on your networks, but some specific points must be addressed before performing analysis, e.g. de-duplicating flows. There are many use cases where flow data is useful:
  • Discover RFC1918 address space
  • Discover internal services
  • Look for blacklisted services
  • Reveal reconnaissance
  • Bad behaviours
  • Compromised hosts, pivot
    • HTTP connection to external host
    • SSH reverse shell
    • Port scanning port 445 / 139
I would have expected a real case where NetFlow was used to discover something juicy. The talk ended with a review of the tools available to process flow data: SiLK, nfdump, ntop, but log management solutions can also be used, like the ELK stack or Apache Spot. Nothing really new but a good reminder.
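To make one of the use cases above concrete, here is a minimal sketch of spotting SMB port scanning (445/139) in flow records. The record format is simplified to (src, dst, dst_port) and the data and threshold are made up:

```python
# Sketch: detect the "port scanning 445/139" use case from flow records.
# Flow tuples are simplified; data and threshold are illustrative.
from collections import defaultdict

flows = [
    ("10.0.0.5", f"10.0.1.{i}", 445) for i in range(1, 40)   # one noisy host
] + [
    ("10.0.0.7", "10.0.1.2", 443),
    ("10.0.0.7", "10.0.1.3", 443),
]

def smb_scanners(flow_records, threshold=20):
    """Flag sources touching many distinct hosts on ports 445/139."""
    targets = defaultdict(set)
    for src, dst, port in flow_records:
        if port in (445, 139):
            targets[src].add(dst)
    return [src for src, dsts in targets.items() if len(dsts) >= threshold]

print(smb_scanners(flows))
```

The same counting pattern covers several of the other use cases (reconnaissance, pivoting) by changing the grouped fields and threshold.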
Then, Joel Snape from BT presented “Discovering (and fixing!) vulnerable systems at scale“. BT, as a major player on the Internet, is facing many issues with compromised hosts (from customers to its own resources). Joel explained the workflow and tools they deployed to help with this huge task. It is based on the following cycle: introduction, data collection, exploration and remediation (the hardest part!).
I liked the description of their “CERT dropbox”, which can be deployed at any place on the network to perform the following tasks:
  • Telemetry collection
  • Data exfiltration
  • Network exploration
  • Vulnerability/discovery scanning
An interesting remark from the audience: ISPs don’t only have to protect their customers from the wild Internet, but also the Internet from their (bad) customers!
Feike Hacqueboard, from TrendMicro, explained “How politically motivated threat actors attack“. He reviewed some famous stories of compromised organizations (like the French TV channel TV5) then reviewed the activity of some interesting groups like C-Major or Pawn Storm. A nice review of the Yahoo! OAuth abuse was given, as well as of the tab-nabbing attack against OWA services.
Jose Enrique Hernandez (Zenedge) presented “Lessons learned in fighting Targeted Bot Attacks“. After a quick review of what bots are (they are not always malicious – think about the Google crawler bot), he reviewed different techniques to protect web resources from bots and why they often fail, like the JavaScript challenge or the Cloudflare bypass. These are “silent” challenges; “loud” challenges are, for example, CAPTCHAs. Then Jose explained how to build a good solution to protect your resources:
  • You need a reverse proxy (to be able to change requests on the fly)
  • LUA hooks
  • State db for concurrency
  • Load balancer for scalability
  • fingerprintjs2 / JS Challenge
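The “silent challenge” part of such a setup might be sketched like this: the reverse proxy issues a signed token, a real browser executes the challenge JavaScript and echoes it back, and a naive bot never does. The key, field layout and timeout below are all made up for illustration:

```python
# Sketch of a "silent" JS challenge issued by a reverse proxy.
# Key, token format and timeout are illustrative assumptions.
import hashlib, hmac, time

SECRET = b"proxy-shared-secret"  # hypothetical key held by the proxy

def issue_challenge(client_ip, now=None):
    ts = str(int(now if now is not None else time.time()))
    sig = hmac.new(SECRET, f"{client_ip}|{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{ts}.{sig}"  # embedded in the JS challenge page

def verify_challenge(client_ip, token, max_age=300, now=None):
    try:
        ts, sig = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{client_ip}|{ts}".encode(), hashlib.sha256).hexdigest()
    fresh = (now if now is not None else time.time()) - int(ts) <= max_age
    return fresh and hmac.compare_digest(sig, expected)

tok = issue_challenge("198.51.100.4", now=1000)
print(verify_challenge("198.51.100.4", tok, now=1100))  # True
print(verify_challenge("203.0.113.9", tok, now=1100))   # False: other client
```

The state database and load balancer from the list above come in when this check must be shared across many proxy instances.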

Finally, two other Cisco guys, Steve McKinney & Eddie Allan, presented “Leveraging Event Streaming and Large Scale Analysis to Protect Cisco“. Cisco is collecting a huge amount of data on a daily basis (they speak in terabytes!). As a Splunk user, they face an issue with the indexing licence: to index all this data, they would need extra licences (and pay a lot of money). They explained how they “pre-process” the data before sending it to Splunk, to reduce the noise and the amount of data to index.
The idea is to put a “black box” between the collectors and Splunk. They explained what’s in this black box with some use cases:
  • WSA logs (350M+ events / day)
  • Passive DNS (7.5TB / day)
  • Users identification
  • osquery data

Some useful tips that they gave, valid for any log management platform:

  • Don’t assume your data is well-formed and complete
  • Don’t assume your data is always flowing
  • Don’t collect all the things at once
  • Share!
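The pre-processing “black box” idea can be sketched in a few lines: drop malformed events, deduplicate, and strip noisy fields before anything reaches the licence-metered indexer. Field names and rules below are made up, not Cisco’s actual pipeline:

```python
# Sketch of the "black box": filter and deduplicate events before the
# (licence-metered) indexer sees them. Field names/rules are illustrative.
import json

seen = set()

def preprocess(raw_lines, drop_fields=("debug",), dedupe_key=("host", "msg")):
    """Yield cleaned, deduplicated events; skip malformed input."""
    for line in raw_lines:
        try:
            event = json.loads(line)       # don't assume data is well-formed
        except json.JSONDecodeError:
            continue
        key = tuple(event.get(k) for k in dedupe_key)
        if key in seen:
            continue                       # duplicate, don't pay to index it
        seen.add(key)
        yield {k: v for k, v in event.items() if k not in drop_fields}

raw = [
    '{"host": "a", "msg": "login ok", "debug": "x"}',
    '{"host": "a", "msg": "login ok"}',   # duplicate, dropped
    'not json at all',                    # malformed, dropped
]
print(list(preprocess(raw)))
```

The first tip above is exactly why the `try/except` is there: never assume the data is well-formed and complete.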

Two intense days full of useful information and tips to better defend your networks and/or collect intelligence. The slides should be published soon.

[The post FIRST TC Amsterdam 2017 Wrap-Up has been first published on /dev/random]

from Xavier

Supply chain security demands closer attention

Often in dangerous situations we initially look outwards and upwards for the greatest threats. Sometimes we should instead be looking inwards and downwards. Supply chain security in information and communication technology (ICT) is exactly one of those situations where detailed introspection could be of benefit to all concerned. The smallest security breach can have disastrous implications, irrespective of whether the attackers’ entry point is within one’s own system or within that of a supplier. ATM breaches, which can expose hundreds of millions of people’s personal information, are one example of how an attack can occur via a contractor.

My experience over the last fifteen or more years of cybersecurity policy work is that in a diverse, globalized and interconnected world, supply chains can pose a major cybersecurity threat if left unmanaged. Many products are built up from elements that are created and modified by different companies in different places. This is as true of software as it is of hardware. Global supply chains create opportunities for the introduction of counterfeit elements or malicious code. The problem is not concentrated in one region and the consequences can be global.

The situation is not wholly new, nor is it wholly unknown. From Microsoft’s perspective, based on our experience in the cyber supply chain risk management (C-SCRM) space and in line with our broad approach to all cybersecurity issues, the best approach to validating ICT products and components is risk-based. If I were to put forward the basic elements of a supply chain risk management stance, they would include:

  • A clear understanding of the critical supply chain risks that need to be mitigated, which will require regular evaluation and adjustment as threats or technologies change;
  • Principles and practices that take account of the lifecycle of threats whilst promoting transparency, accountability and trust between companies themselves and between companies and the authorities;
  • An understanding that flexibility is critical, given i) vendors’ differing business models and markets, and ii) that seemingly simple changes in technology can rapidly change threat models; and,
  • A holistic approach to C-SCRM spanning technical controls, operational controls, and vendor & personnel controls.

In addition to effective risk management, I can see a clear case for international standards in international supply chains. If we recognize that even the smallest weakness in a jurisdiction “over there” might be a way in for cyber criminals “over here”, international standards would be a common basis for judging whether or not a supply chain can be secure in its fundamentals.

Governments considering how to make their ICT supply chains more secure need to solicit industry feedback on their proposals. Indeed, I would argue that public-private partnerships to develop supply chain proposals are the best way to approach the issue. Both states and companies gain by cooperating in the fight against supply chain-led cyberattacks.

Microsoft depends on the trust our customers place in our products and as a multinational company, we understand the relevance of secure cross-border supply chains. So, even if C-SCRM is rarely the first thing considered when looking at cybersecurity, we will continue to make the case for a comprehensive and global approach to securing ICT supply chains that is risk-based, transparent, flexible and standards-led.

from Paul Nicholas

How future policy and regulations will challenge AI

I recently wrote about how radical the incorporation of artificial intelligence (AI) into cybersecurity will be. Technological revolutions are however frequently not as rapid as we think. We tend to see specific moments, from Sputnik in 1957 to the iPhone in 2007, and call them “game changing” – without appreciating the intervening stages of innovation, implementation and regulation, which ultimately result in that breakthrough moment. What can we therefore expect from this iterative and less eye-catching part of AI’s development, looking not just at the technological progress, but at its interaction with the national policy-making process?

I can see two overlapping, but distinct, perspectives. The first relates to the reality that information and communication technology (ICT) and its applications develop faster than laws. In recent years, examples such as social media and ride-hailing apps have seen this translate into the following regulatory experience:

  1. Innovation: R&D processes arrive at one or many practical options for a technology;
  2. Implementation: These options are applied in the real world, are refined through experience, and begin to spread through major global markets;
  3. Regulation: Governments intervene to defend the status quo or to respond to new categories of problem, e.g. cross-border data flows;
  4. Unanticipated consequences: Policy and technology’s interaction inadvertently harms one or both, e.g. the Wassenaar’s impact on cybersecurity R&D.

AI could follow a similar path. However, unlike e-commerce or the sharing economy (but like nanotechnology or genetic engineering) AI actively scares people, so early regulatory interventions are likely. For example, a limited focus on using AI in certain sectors, e.g. defense or pharmaceuticals, might be positioned as more easily managed and controlled than AI’s general application. However, could such a limit really be imposed, particularly in light of the potential for transformative creative leaps that AI seems to promise? I say that would be unlikely – resulting in yet more controls. Leaving aside the fourth stage, the unknown unknowns of unanticipated consequences, the third phase, i.e. regulation, would almost inevitably run into trouble of its own by virtue of having to legally define something as unprecedented and mutable as AI. It seems to me, therefore, that even the basic phases of AI’s interaction with regulation could be fraught with problems for innovators, implementers and regulators.

The second, more AI-specific perspective is driven by the way its capabilities will emerge, which I feel will break down into three basic stages:

  1. Distinction: Creation of smarter sensors;
  2. Direction: Automation of human-initiated decision-making;
  3. Delegation: Enablement of entirely independent decision-making.

Smarter sensors will come in various forms, not least as part of the Internet of Things (IoT), and their aggregated data will have implications for privacy. 20th century “dumb lenses” are already being connected to systems that can pick out number plates or human faces but truly smart sensors could know almost anything about us, from what is in our fridge and on our grocery list, to where we are going and whom we will meet. It is this aggregated, networked aspect of smarter sensors that will be at the core of the first AI challenge for policy-makers. As they become discriminating enough to anticipate what we might do next, e.g. in order to offer us useful information ahead of time, they create an inadvertent panopticon that the unscrupulous and actively criminal can exploit.

Moving past this challenge, AI will become able to support and enhance human decision-making. Human input will still be essential but it might be as limited as a “go/no go” on an AI-generated proposal. From a legal perspective, mens rea or scope of liability might not be wholly thrown into confusion, as a human decision-maker remains. Narrow applications in certain highly technical areas, e.g. medicine or engineering, might be practical but day-to-day users could be flummoxed if every choice had unreadable but legally essential Terms & Conditions. The policy-making response may be to use tort/liability law, obligatory insurance for AI providers/users, or new risk management systems to hedge the downside of AI-enhanced decision-making without losing the full utility of the technology.

Once decision-making is possible without human input, we begin to enter the realm of speculation. However, it is important to remember that there are already high-frequency trading (HFT) systems in financial markets that operate independently of direct human oversight, following algorithmic instructions. The suggested linkages between “flash crash” events and HFT nonetheless highlight the problems policy-makers and regulators will face. It may be hard to foresee what even a “limited” AI might do in certain circumstances, and the ex-ante legal liability controls mentioned above may seem insufficient to policy-makers should a system get out of control, either in the narrow sense of being out of the control of those people legally responsible for it, or in the general sense of being out of anybody’s control.

These three stages would suggest significant challenges for policy-makers, with existing legal processes losing their applicability as AI moves further away from direct human responsibility. The law is, however, adaptable, and solutions could emerge. In extremis we might, for example, be willing to add to the concept of “corporate persons” a concept of “artificial persons”. Would any of us feel safer if we could assign legal liability to the AIs themselves and then sue them as we do corporations and businesses? Maybe.

In summary then, the true challenges for AI’s development may not exist solely in the big-ticket moments of beating chess masters or passing Turing tests. Instead, there will be any number of roadblocks caused by the needs of regulatory and policy systems still rooted in the 19th and 20th centuries. And, odd though this may sound from a technologist like me, that delay might be a good thing, given the potential transformative power of AI.


from Paul Nicholas

4 steps to managing shadow IT

Shadow IT is on the rise. More than 80 percent of employees report using apps that weren’t sanctioned by IT. Shadow IT includes any unapproved hardware or software, but SaaS is the primary cause of its rapid rise. Today, attempting to block it is an outdated, ineffective approach. Employees find ways around IT controls.

How can you empower your employees and still maintain visibility and protection? Here are four steps to help you manage SaaS apps and shadow IT.

Step 1: Find out what people are actually using

The first step is to get a detailed picture of how employees use the cloud. Which applications are they using? What data is uploaded and downloaded? Who are the top users? Is a particular app too risky? These insights provide information that can help you develop a strategy for cloud app use in your organization, as well as indicate whether an account has been compromised or a worker is taking unauthorized actions.
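A first pass at this picture often starts with proxy or firewall logs. A minimal sketch, where the log format and the domain-to-app mapping are made-up examples (a real CASB ships a large catalog of apps and risk scores):

```python
# Sketch: building the "what are people actually using" picture from
# proxy logs. Log entries and app mapping are illustrative.
from collections import Counter
from urllib.parse import urlparse

KNOWN_APPS = {  # hypothetical domain -> app mapping
    "dropbox.com": "Dropbox",
    "drive.google.com": "Google Drive",
}

def app_usage(proxy_log_urls):
    """Count requests per cloud app; unknown domains count as 'Other'."""
    usage = Counter()
    for url in proxy_log_urls:
        host = urlparse(url).netloc
        app = next((a for d, a in KNOWN_APPS.items()
                    if host == d or host.endswith("." + d)), "Other")
        usage[app] += 1
    return usage

logs = [
    "https://www.dropbox.com/upload",
    "https://drive.google.com/file/x",
    "https://dropbox.com/home",
    "https://unknown-saas.example/api",
]
print(app_usage(logs))
```

The “Other” bucket is where shadow IT shows up: domains nobody in IT has classified yet.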

Step 2: Control data through granular policies

Once you have comprehensive visibility into and understanding of the apps your organization uses, you can begin to monitor users’ activities and implement custom policies tailored to your organization’s security needs, such as restricting certain data types or alerting on unexpectedly high rates of an activity. You can then take action when there are violations of your policy. For instance, you can make a public link private or quarantine a user.

Step 3: Protect your data at the file level

Protecting data at the file level is especially important when data is accessed via unknown applications. Data loss prevention (DLP) policies can help ensure that employees don’t accidentally send sensitive information, such as personally identifiable information (PII), credit card numbers, and financial results, outside of your corporate network. Today, there are solutions that help make that even easier.
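One classic DLP building block is scanning file content for digit runs that pass the Luhn check (i.e. plausible credit card numbers). A minimal sketch; the patterns and the policy action are illustrative, and production DLP engines combine many such detectors with context:

```python
# Sketch: a file-level DLP check - find digit runs that pass the Luhn
# check (candidate credit card numbers). Patterns are illustrative.
import re

def luhn_ok(digits):
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:     # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text):
    candidates = re.findall(r"\b(?:\d[ -]?){13,16}\b", text)
    hits = []
    for c in candidates:
        digits = re.sub(r"[ -]", "", c)
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits

doc = "Invoice: pay with 4111 1111 1111 1111, ref 1234 5678."
print(find_card_numbers(doc))
```

A policy engine would then block the upload or strip/quarantine the file when such a hit occurs.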

Step 4: Use behavioral analytics to protect apps and data

Through machine learning and behavioral analytics, innovative threat detection technologies analyze how each user interacts with the SaaS applications and assess the risks through deep analysis. This helps you to identify anomalies that may indicate a data breach, such as simultaneous logons from two countries, the sudden download of terabytes of data, or multiple failed-logon attempts that may signify a brute force attack.
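Two of the detections mentioned above can be sketched with simple rules. The event format, thresholds and data below are made up for illustration; real products use learned per-user baselines rather than fixed cut-offs:

```python
# Sketch of two detections: logons from two countries in an impossible
# travel window, and a burst of failed logons. Data/thresholds are made up.
from collections import defaultdict

events = [  # (user, country, outcome, minute-of-day)
    ("alice", "NL", "success", 600),
    ("alice", "BR", "success", 610),   # 10 minutes later, another continent
    ("bob",   "NL", "failure", 620),
    ("bob",   "NL", "failure", 621),
    ("bob",   "NL", "failure", 622),
    ("bob",   "NL", "failure", 623),
]

def anomalies(evts, travel_window=60, fail_threshold=3):
    alerts = []
    by_user = defaultdict(list)
    for user, country, outcome, t in evts:
        by_user[user].append((country, outcome, t))
    for user, rows in by_user.items():
        logons = [(c, t) for c, o, t in rows if o == "success"]
        for (c1, t1), (c2, t2) in zip(logons, logons[1:]):
            if c1 != c2 and abs(t2 - t1) < travel_window:
                alerts.append((user, "impossible travel"))
        fails = sum(1 for _, o, _ in rows if o == "failure")
        if fails >= fail_threshold:
            alerts.append((user, "possible brute force"))
    return alerts

print(anomalies(events))
```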

Where can you start?

Consider a Cloud Access Security Broker (CASB). These solutions are designed to help you achieve each of these steps in a simple, manageable way. They provide deeper visibility, comprehensive controls, and improved protection for the cloud applications your employees use—sanctioned or unsanctioned.

To learn why CASBs are becoming a necessity, read our new e-book. It outlines the common issues surrounding shadow IT and how a CASB can be a helpful tool in your enterprise security strategy.

Read Bring Shadow IT into the Light.


from Microsoft Secure Blog Staff