TROOPERS 2017 Day #3 Wrap-Up

The third day is already over! Today the regular talks were scheduled split in three tracks: offensive, defensive and a specific one dedicated to SAP. The first slot at 09:00 was, as usual, a keynote. Enno Rey presented ten years of TROOPERS. What happened during all those editions? The main ideas behind TROOPERS have always been that everybody must learn something by attending the conference but… with fun and many interactions with other peers! The goal was to mix infosec people coming from different horizons. And, of course, to use the stuff learned to contribute back to the community. Things changed a lot during these ten years, some are better while others remain the same (or worse?). Enno reviewed all the keynotes presented and, for each of them, gave some comments – sometimes funny. The conference in itself also evolved with a SAP track, the Telco Sec Day, the NGI track and when they move to Heidelberg. Some famous vulnerabilities were covered like MS08-067 or the RSA hack. What we’ve seen:

  • A move from theory to practice
  • Some things/crap that stay the same (same shit, different day)
  • A growing importance of the socio-economic context around security.

Has progress been made? Enno reviewed infosec in three dimensions:

  • As a (scientific) discipline: From theory to practice. So yes, progress has been made
  • In enterprise environments: Some issues on endpoints have been fixed but there is a fact: Windows security has become much better but now they use Android :). Security in the datacenter also improved but now there is the cloud. 🙂
  • As a constituent for our society: Complexity is ever growing.

Are automated systems the solution? There are still technical and human factors that matter: “Errare Humanum Est”, said Enno. Information security is still a work in progress and we have to work for it. Again, the example of the IoT crap was used. Education is key. So, yes, the TROOPERS motto is still valid: “Make the world a better place”. Based on the applause from the audience, this was a great keynote by a visibly moved Enno!

I started my day in the defensive track. Veronica Valeros presented “Hunting Them All”. Why do we need hunting capabilities? A definition of threat hunting is “to help in spotting attacks that would pass our existing controls and do more damage to the business“.

Veronica's Daily Job: Hunting!

People are constantly hit by threats (spam, phishing, malware, trojans, RATs, … you name them). Being always online also increases our attack surface. Attacks are very lucrative and attract a lot of bad guys. Sometimes, malware may change things. A good example comes with the ransomware plague: it made people aware that backups are critical. Threat hunting is not easy because, when you are sitting on your network, you don’t always know what to search for. And malicious activity does not always rely on top-notch technologies. Attackers are not all ‘l33t’. They just want to bypass controls and make their malicious code run. To achieve this, they have a lot of time, they abuse the weakest link and they hide in plain sight. To sum up: they follow the “least effort rule”. Which sounds legit, right? Veronica has access to a lot of data. Her team is performing hunting across hundreds of networks, millions of users and billions of web requests. How to process this? Machine learning comes to the rescue. And Veronica’s job is to check and validate the output of the machine learning process they developed. But it’s not a magic tool that will solve all issues. The focus must be put on what’s important: from 10B requests/day down to 20K incidents/day using anomaly detection, trust modelling, event classification, and entity & user modelling. Veronica gave an example. The Sality botnet has been active since 2003 and is still present. IOCs exist but they generate a lot of false positives. Regular expressions are not flexible enough. Can we create algorithms to automatically track malicious behaviour? For some threats it works, for others it doesn’t. Veronica’s team is tracking over 200 malicious behaviours and 60% of the tracking is automated. “Let the machine do the machine work”. As a good example, Veronica explained how referrers can be the source of important data leaks from corporate networks.
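The referrer leak is easy to hunt for yourself. A minimal sketch (not Veronica's actual pipeline; the internal domain suffixes are assumptions for illustration): flag web requests whose Referer header exposes an internal hostname to an external site.

```python
# Flag requests where an internal Referer leaks to an external destination.
from urllib.parse import urlparse

# Assumed internal naming convention; adapt to your environment.
INTERNAL_SUFFIXES = (".corp.example.com", ".intranet.example.com")

def leaking_referrers(log_entries):
    """log_entries: iterable of (destination_url, referer_url) tuples,
    e.g. parsed from proxy logs. Returns the suspicious pairs."""
    leaks = []
    for dest, referer in log_entries:
        if not referer:
            continue
        ref_host = urlparse(referer).hostname or ""
        dst_host = urlparse(dest).hostname or ""
        # An internal page referring a visit to an external site may leak
        # project names, usernames or document paths in the URL.
        if ref_host.endswith(INTERNAL_SUFFIXES) and not dst_host.endswith(INTERNAL_SUFFIXES):
            leaks.append((dest, referer))
    return leaks
```

In a real hunt you would feed this from proxy logs and whitelist known-benign destinations first.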

My next choice was “Securing Network Automation” by Ivan Pepelnjak. In a previous talk, Ivan explained why Software Defined Networking failed, but many vendors improved, which is good. So today, his new topic was about ways to improve automation from a security perspective. Indeed, we must automate as much as possible, but how to make it reliable and secure? If a process is well defined, it can be automated, as Ivan said. Why automate? From a management perspective, the same reasons always come up: increase flexibility while reducing costs, deploy faster and compete with public cloud offerings. About the cloud, do we need to buy or to build? In all cases, you’ll have to build if you want to automate. The real challenge is to move quickly from development to test and production. To achieve this, instead of editing a device configuration live, create configuration text files and push them to a GitLab server. Then you can virtualise a lab, pull the configs and test them. Did it work? Then merge with the main branch. A lot can be automated: device provisioning, VLAN management, ACLs, firewall rules. But the challenge is to have strong controls to prevent issues upfront and troubleshoot if needed. A nice quote was:
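That merge gate can be sketched in a few lines. This is a toy model with made-up checks: a real pipeline would render configs from templates, push them to a virtual lab and only merge the branch when lab verification passes.

```python
# Toy merge gate for a network-config-as-code pipeline.
def validate_config(text):
    """Cheap static checks before a config ever reaches the lab."""
    problems = []
    if not text.strip():
        problems.append("empty config")
    if "{{" in text or "}}" in text:
        problems.append("unrendered template variable")
    return problems

def merge_gate(configs, lab_test):
    """configs: {device_name: config_text}.
    lab_test: callable standing in for 'spin up virtual lab, push
    configs, verify'. Merge only if everything passes."""
    for name, text in configs.items():
        if validate_config(text):
            return False, f"static check failed for {name}"
    if not lab_test(configs):
        return False, "lab verification failed"
    return True, "ok to merge"
```

The point is the ordering: cheap static checks fail fast, and nothing reaches production devices without passing the virtual lab.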

“To make a mistake is human; to automatically deploy that mistake to all servers, use DevOps”

You remember the bad Amazon story? Be prepared to face issues. To automate, you need tools and such tools must be secure. An example was given with Ansible. The issue is that it gathers information from untrusted sources:

  • Scripts are executed on managed devices: what about data injection?
  • Custom scripts are included in data gathering: More data injection?
  • Returned data are not properly parsed: Risk of privilege escalation?

The usual controls to put in place are:

  • OOB management
  • Management network / VR
  • Limit access to the management hosts
  • SSH-based access
  • Use SSH keys
  • RBAC (commit scripts)

Keep in mind: your network is critical, so your automation (network programming) is too. Don’t write the code yourself (hire a skilled Python programmer for this task) but you must know what the code should do. Test, test, test and, once done, test again. As an example of control, you can perform a traceroute before and after the change and compare the paths. Ivan published a nice list of requirements for your vendor while looking for a new network device. If your current vendor cannot provide basic requirements like an API, change it!
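The traceroute control is simple to sketch. Hop parsing is kept deliberately minimal here (real traceroute output has more cases, like `* * *` timeouts), so treat this as a starting point rather than a finished check:

```python
# Record the forward path before a change, repeat after, alert on diffs.
import re
import subprocess

HOP_RE = re.compile(r"^\s*\d+\s+(\d+\.\d+\.\d+\.\d+)", re.MULTILINE)

def parse_hops(traceroute_output):
    """Extract the ordered hop IPs from `traceroute -n` output."""
    return HOP_RE.findall(traceroute_output)

def forward_path(target, max_hops=15):
    out = subprocess.run(
        ["traceroute", "-n", "-m", str(max_hops), target],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_hops(out)

def path_changed(before, after):
    """True if the hop sequence differs after the change."""
    return before != after
```

Run `forward_path()` against a few representative targets before the change, again after, and raise an alarm when `path_changed()` is true.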

After the lunch, back to the defence & management track with “Vox Ex Machina” by Graeme Neilson. The title looked interesting: was it more offensive or defensive content? Voice recognition is used more and more (examples: Cortana, Siri, etc) but also on non-IT systems like banking or support systems: “Press 1 for X or press 2 for Y”. But is it secure? Voice recognition is not new hype. There are references to the “Voder” as early as 1939. Another system was the Vocoder a few years later. Voice recognition is based on two methods: phrase dependent or independent (this talk focussed on the first method). The process is split in three phases:

  • Enrolment: you record a phrase x times. Each recording must be different and the analysis is stored as a voice print.
  • Authentication: based on feature extraction using MFCC (Mel-Frequency Cepstral Coefficients).
  • Confidence: returned as a percentage.

The next part of the talk focused on the tool developed by Graeme. Written in Python, it tests a remote API. The different supported attacks are: replay, brute-force and voice print fixation. An important remark made by Graeme: even if some services claim it, your voice is NOT a key! Every time you pronounce a word, the generated file is different. That’s why the process of brute-forcing is completely different with voice recognition: you know when you are getting closer thanks to the returned confidence (in %), instead of a password comparison which returns “0” or “1”. The tool developed by Graeme is available here (or will be soon after the conference).
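Why a confidence score changes everything can be shown with a toy example. The mock “API” below scores bit strings, not audio, and has nothing to do with Graeme's tool; it only illustrates that a percentage lets a hill-climber converge where a boolean check teaches the attacker nothing:

```python
# Confidence-guided brute force: a leaky score makes search tractable.
import random

def confidence(candidate, secret):
    """Mock voice API: % of positions matching the enrolled print."""
    hits = sum(a == b for a, b in zip(candidate, secret))
    return 100.0 * hits / len(secret)

def hill_climb(secret, alphabet="01", seed=0):
    """Recover the secret using only the confidence score."""
    rng = random.Random(seed)
    guess = [rng.choice(alphabet) for _ in secret]
    best = confidence(guess, secret)
    while best < 100.0:
        i = rng.randrange(len(guess))
        old = guess[i]
        guess[i] = rng.choice(alphabet)
        score = confidence(guess, secret)
        if score >= best:
            best = score      # keep mutations that don't lower confidence
        else:
            guess[i] = old    # revert when confidence drops
    return "".join(guess)
```

Against a strict equality check, every wrong guess returns the same “no”; against a graded score, each probe leaks direction.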

The next talk was presented by Matt Graeber and Casey Smith: “Architecting a Modern Defense using Device Guard”. The talk was scheduled on the defensive track but it covered both worlds. The question that interests many people is: is whitelisting a good solution? Bad guys (and red teams) are trying to find bypass strategies. What are the mitigations available for the blue teams? The attacker’s goal is clear: execute HIS code on YOUR computer. There are two types of attackers: the ones who know what controls you have in place (enlightened) and the novices who aren’t equipped to handle your controls (ex: the massive phishing campaigns dropping Office documents with malicious macros). Device Guard offers the following protections:

  • Prevents unauthorised code execution
  • Restricted scripting environment
  • Prevents policy tampering (via virtualisation-based security)

The speakers were honest: Device Guard does NOT protect against all threats but it increases the noise (evidence) an attacker generates. Bypasses are possible. How?

  • Policy misconfiguration
  • Misplaced trust
  • Enlightened scripting environments
  • Exploitation of vulnerable code
  • Implementation flaws

How you deploy your policy depends on your environment, which is key, but it also depends on the security ecosystem we live in. Would you trust all code signed by Google? Probably yes. Do you trust any certificate issued by Symantec? Probably not. The next part of the talk was a review of the different bypass techniques (offensive) and then some countermeasures (defensive). A nice demo was performed with PowerShell to bypass the constrained language mode. Keep in mind that some allowed applications might be vulnerable. Do you remember the VirtualBox signed driver vulnerability? Besides those problems, Device Guard offers many advantages:

  • Uncomplicated deployment
  • Implicit DLL enforcement
  • Supported across the Windows ecosystem
  • Core system component
  • PowerShell integration

Conclusion: whitelisting is often a huge debate (pro/con). Despite the flaws, it forces adversaries to reset their tactics. By doing this you disrupt the attackers’ economics: if it makes the system harder to compromise, it will cost them more time/money.

After the afternoon coffee break, I switched to the offensive track again to follow Florian Grunow and Niklaus Schuss who presented “Exploring North Korea’s Surveillance Technology”. I had no idea about the content of the talk but it was really interesting and an eye-opener! It’s a fact: if it’s locked down, it must be interesting. That’s why Florian and Niklaus performed research on the systems provided to DPRK citizens (“Democratic People’s Republic of Korea“). The research was based on papers published by others and leaked devices / operating systems. They never went over there. The motivation behind the research was to get a clear view of the surveillance and censorship put in place by the government. It started with the Linux distribution called “Red Star OS”. It has been based on Fedora/KDE across multiple versions and looks like a modern Linux distribution but… First finding: the certificates installed in the browser all come from the Korean authorities. Also, some suspicious processes cannot be killed. Integrity checks are performed on system files and downloaded files are changed on the fly by the OS (example: files transferred via USB storage). The OS adds a watermark at the end of the file which helps to identify the computer that was used. If the file is transferred to another computer, a second watermark is added, etc. This is a nice method to track dissidents and to build a graph of relations between them. Note that this watermark is only added to data files and that it can easily be removed. An antivirus is installed but can also be used to delete files based on their hash. Of course, the AV update servers are maintained by the government. After the desktop OS, the speakers reviewed some “features” installed on the “Woolim” tablet. This device is based on Android and does not have any connectivity onboard. You must use a specific USB dongle for this (provided by the government, of course).
When you try to open some files, you get a warning message: “This is not signed file”. Indeed, the tablet can only work with files signed by the government or signed locally (based on RSA signatures). The goal, here again, is to prevent the distribution of media files. From a network perspective, there is no direct Internet access and all the traffic is routed through proxies. An interesting application running on the tablet is called “TraceViewer”. It takes a screenshot of the tablet at regular intervals. The user cannot delete the screenshots, and random physical controls can be performed by the authorities to keep the pressure on the citizens. This talk was really an eye-opener for me. Really crazy stuff!
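The chained-watermark idea (and why it is trivial to strip) can be illustrated with a toy format. The real Red Star OS format is different; this made-up scheme only shows how append-only tags build a distribution graph:

```python
# Toy append-only watermark chain: each host appends a 12-byte record.
MAGIC = b"WMRK"

def add_watermark(data, host_id):
    """Each host that touches the file appends its own tagged record."""
    return data + MAGIC + host_id.encode().ljust(8, b"\x00")[:8]

def read_chain(data):
    """Recover (clean_data, ordered list of host IDs that handled it).
    Walking records off the tail also demonstrates easy removal."""
    hosts = []
    while len(data) >= 12 and data[-12:-8] == MAGIC:
        hosts.append(data[-8:].rstrip(b"\x00").decode())
        data = data[:-12]
    return data, list(reversed(hosts))
```

Collecting the chains from seized files gives exactly the relationship graph described above, which is why the researchers stressed that the watermark can (and should) be stripped.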

Finally, my last choice was another defensive talk: “Arming Small Security Programs” by Matthew Domko. The idea is to generate a network baseline, exactly like we do for applications on Windows. For many organizations, the problem is detecting malicious activity on the network. Using an IDS quickly becomes useless due to the volume and the limitations of signatures. Matthew’s idea was to:

  • Build a baseline (all IPs, all ports)
  • Write snort rules
  • Monitor
  • Profit

To achieve this, he used the tool Bro. Bro is some kind of Swiss army knife for IDS environments. Matthew made a quick introduction to the tool and, more precisely, focussed on the scripting capabilities of Bro. Logs produced by Bro are also easy to parse. The tool developed by Matthew implements a simple baseline script. It collects all connections to IP addresses / ports and logs what is NOT known. The tool is called Bropy and should be available soon after the conference. A nice demo was performed. I really liked the idea behind this tool but it should be improved and features added before being used in big environments. I would recommend having a look at it if you need to build a network activity baseline!
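The core of the approach fits in a few lines. This is a minimal sketch of the idea (not Bropy itself): learn the set of (destination IP, port) pairs during a baseline window, then alert on anything unseen.

```python
# Learn-then-alert baseline over (dst_ip, dst_port) pairs.
class ConnectionBaseline:
    def __init__(self):
        self.known = set()
        self.learning = True   # flip to False once the baseline window ends

    def observe(self, dst_ip, dst_port):
        """Returns None during learning; afterwards, returns an alert
        string for any connection outside the baseline."""
        key = (dst_ip, dst_port)
        if self.learning:
            self.known.add(key)
            return None
        if key not in self.known:
            return f"NEW connection: {dst_ip}:{dst_port}"
        return None
```

In practice you would feed this from Bro's conn.log and then turn the unexpected pairs into Snort rules, as in Matthew's workflow.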

The day ended with the classic social event. Local food, drinks and nice conversations with friends, which is priceless. I have to apologize for the delay to publish this wrap-up. Complaints can be sent to Sn0rkY! 😉

[The post TROOPERS 2017 Day #3 Wrap-Up has been first published on /dev/random]

from Xavier

“Time for Password Expiration to Die”

Editor’s Note: This is based on a post I did to the SANS GIAC mailing list. I’ve been meaning to blog about password expirations and this was the kick in the butt I needed. This is also the perfect example of the saying – “amateurs mitigate risk, professionals manage risk.” Per Thorsheim, Cormac Herley, I and … Continue reading Time for Password Expiration to Die

from lspitzner

A new best practice to protect technology supply chain integrity

This post is authored by Mark Estberg, Senior Director, Trustworthy Computing. 

The success of digital transformation ultimately relies on trust in the security and integrity of information and communications technology (ICT). As ICT systems become more critical to economic prosperity, governments and organizations around the world are increasingly concerned about threats to the technology supply chain. These concerns stem from fear that an adversary might tamper with or manipulate products during development, manufacture, or delivery. This poses a challenge to the technology industry: If our products are to be fully trusted, we must be able to provide assurance to our customers that the technology they reviewed and approved before deployment is the same software that is running on their computers.

To increase confidence, organizations have increasingly turned to source code analysis through direct inspection of the supply chain by a human expert or an automated tool. Source code is a set of computer instructions written in a programming language that humans can read. This code is converted (or compiled) into a binary file of instructions—a language of zeroes and ones that machines can process and execute—called an executable. This conversion of human-readable code to machine-readable code, however, raises the unsettling question of whether the machine code—and ultimately the software program running on computers—was built from the same source code files that the expert or tool analyzed. There has been no efficient and reliable method to answer this, even for open source software. Until now.

At Microsoft, we have developed a way to definitively demonstrate that a compiled machine-readable executable was generated from the same human-readable source code that was reviewed. It’s based on the concept of a “birth certificate” for binary files, which consists of unique numbers (or hash values) that are cryptographically strong enough to identify individual source code files.

As source code is compiled in Visual Studio, the compiler assigns the source code a hash value generated in such a way that it is virtually impossible that any other code will produce the same hash value. By matching hash values from the compiler to those generated from the examined source code files, we can verify that the executable code did indeed result from the original source code files.

This method is described in more detail in Hashing Source Code Files with Visual Studio to Assure File Integrity. The paper gives a full description of the new Visual Studio switch for choosing a hashing algorithm, suggested scenarios where such hashes might prove useful, and how to use Visual Studio to generate these source code hashes.

Microsoft believes that the technology industry must do more to assure its stakeholders of the integrity of software and the digital supply chain. Our work on hashing is both a way to help our customers and a way to further how the industry is addressing this growing problem:

  • This source file hashing can be employed when building C, C++, and C# executable programs in Visual Studio.
  • Technology providers can use unique hash value identifiers in their own software development for tracking, processing, and controlling source code files that definitively demonstrate a strong linkage to the specific executable files.
  • Standards organizations can include in their best practices the requirement to take this very specific and powerful step toward authenticity.

We believe that capabilities such as binary source file hashing are necessary to establish adequate trust to fulfill the potential of digital transformation. Microsoft is committed to building trust in the technology supply chain and will continue to innovate with our customers, partners and other industry stakeholders.

Practical applications of digital birth certificates

There are many practical applications for our binary source file hashing capability, including these:

  • Greater assurance through automated scanning. As an automated analysis tool scans the source code files, it can also generate a hash value for each of the files being scanned. Matching hash values from the compiler with hash values generated by the analysis not only definitively demonstrates that they were compiled into the executable code, but that the source code files were scanned with the approved tool.
  • Improved efficiency in identifying vulnerabilities. If a vulnerability is identified in a source file, the hash value of the source file can be used to search among the birth certificates of all the executable programs to identify programs likely to include the same vulnerability.

To learn more about evolving threats to the ICT supply chain, best practices, and Microsoft’s strategy, check out our webinar, Supply Chain Security: A Framework for Managing Risk.

from Microsoft Secure Blog Staff

TROOPERS 2017 Day #2 Wrap-Up

This is my wrap-up for the 2nd day of “NGI” at TROOPERS. My first choice for today was “Authenticate like a boss” by Pete Herzog. This talk was less technical than expected but interesting. It focussed on a complex problem: Identification. It’s not only relevant for users but for anything (a file, an IP address, an application, …). Pete started by providing a definition. Authentication is based on identification and authorisation. But identification can be easy to fake. A classic example is the hijacking of a domain name by sending a fax with a fake ID to the registrar – yes, some of them are still using fax machines! Identification is used at any time to ensure the identity of somebody to give access to something. It’s not only based on credentials or a certificate.

Identification is extremely important. You have to distinguish the good and the bad at any time. Not only people but files, IOCs, threat intelligence actors, etc. For files, metadata can help with identification. Another example reported by Pete: the attribution of an attack. We cannot be 100% confident about the person or the group behind the attack. The next generation Internet needs more and more identification. Especially with all those IoT devices deployed everywhere. We don’t even know what the device is doing. Often, the identification process is not successful. How many times did you say “hello” to somebody who was not the right person, on the street or while driving? Why? Because we (as well as objects) are changing. We are getting older, wearing glasses, etc… Every interaction you have in a process increases your attack surface as much as one vulnerability. What is more secure? Let a user choose his password or generate a strong one for him? He’ll not remember the generated one and will write it down somewhere. In the same way, what’s best: a password or a certificate? An important concept explained by Pete is the “intent”. The problem is to have a good idea of the intent (from 0 – none – to 100% – certain).

Example: if an attacker is filling your firewall state table, is it a DoS attack? If somebody performs a traceroute to your IP addresses, is it foot-printing? Can a port scan automatically be categorised as hunting? And will a vulnerability scan be immediately followed by an attempt to exploit? Not always… It’s difficult to predict specific actions. To conclude, Pete mentioned machine learning as a tool that may help with indicators of intent.

After an expected coffee break, I switched to the second track to follow “Introduction to Automotive ECU Research” by Dieter Spaar. ECU stands for “Electronic Control Unit”. It’s some kind of brain present in modern cars that helps to control the car’s behaviour and all its options. The idea for the research came after the problem that BMW faced with the unlocking of their cars. Dieter’s motivations were multiple: engine tuning, speedometer manipulation, ECU repair, information privacy (what data is stored by a car?), the “VW scandal” and eCall (emergency calls). Sometimes, some features are just a question of ECU configuration. They are present but not activated. Also, from a privacy point of view, what do infotainment systems collect from your paired phone? How much data is kept by your GPS? ECUs depend on the car model and options. In the picture below, yellow blocks are activated ECUs, others (grey) are optional (this picture is taken from an Audi A3 schema):

Audi A3 ECU

Interaction with the ECU is performed via a bus. There are different bus systems: the best known is CAN (Controller Area Network), then MOST (Media Oriented System Transport), FlexRay, LIN (Local Interconnect Network), Ethernet or BroadR-Reach. Interesting fact: some BMW cars have an Ethernet port to speed up upgrades of the infotainment system (like GPS maps), as Ethernet provides more bandwidth to upload big files. ECU hardware is based on typical microcontrollers from Renesas, Freescale or Infineon. Infotainment systems run on ARM, sometimes x86, with QNX, Linux or Android. A special requirement is to provide a fast response time after power on. Dieter showed a lot of pictures of ECUs where you can easily identify the main components (radio, infotainment, telematics, etc). Many of them are manufactured by Peiker. This was a very quick introduction but it demonstrated that there is still space for plenty of research projects with cars. During the lunch break, I had an interesting chat with two people working at Audi. Security is clearly a hot topic for car manufacturers today!

For the next talk, I switched again to the other track and attended “PUF ’n’ Stuf” by Jacob Torrey & Anders Fogh. The idea behind this strange title was “getting the most out of the digital world through physical identities”. The title came from a US TV show popular in the 60’s. Today, within our ultra-connected digital world, we are moving our identity away from the physical world and it becomes difficult to authenticate somebody. We are losing the “physical” aspect. Humans can quickly spot an imposter just by looking at a picture and after a simple conversation, even without personally knowing the person. But authenticating people via a simple login/password pair in the digital world is much harder. The idea of Jacob & Anders was to bring strong physical identification into the digital world. The concept is called “PUF”, or “Physically Unclonable Function“. To achieve this, they explained how to implement a challenge-response function for devices that should return responses as stable (non-volatile) as possible. This can be used to attest the execution state or generate device-specific data. They reviewed examples based on SRAM, EEPROM or CMOS/CCD. The last example is interesting. The technique is called PRNU and can be used to uniquely identify image sensors; it is often used in forensic investigations to link a picture to a camera. You can see this PUF as a second authentication factor. But there are caveats, like a lack of proper entropy or PUF spoofing. An interesting idea, but not easy to implement in practical cases.
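The SRAM example can be modelled in a few lines. This is a toy: each power-up reading of the cells is slightly noisy, so a majority vote across readings recovers a stable per-device fingerprint. Real designs use fuzzy extractors and error correction instead of this naive vote.

```python
# Toy SRAM-style PUF: noisy readouts, majority vote, stable device ID.
import hashlib
import random

def noisy_readout(true_bits, flip_prob, rng):
    """One power-up reading: each cell flips with probability flip_prob."""
    return [b ^ (rng.random() < flip_prob) for b in true_bits]

def fingerprint(readouts):
    """Majority vote per cell across readings, then hash into an ID."""
    n = len(readouts)
    voted = [1 if sum(col) * 2 > n else 0 for col in zip(*readouts)]
    return hashlib.sha256(bytes(voted)).hexdigest()
```

The vote absorbs a small amount of per-reading noise, which is exactly the property that makes the physical artefact usable as an identity; too little entropy in the cells, or an attacker who can replay the readout, defeats it (the caveats above).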

After the lunch, Stefan Kiese had a two-hour slot to present “The Hardware Striptease Club”. The idea of the presentation was to briefly introduce some components that we can find today in our smart houses and see how to break them from a physical point of view. Stefan briefly explained the methodology to approach those devices. When you do this, never forget the impact: loss of revenue, theft of credentials, etc… or worse, life (pacemakers, cars). Some reviewed victims:

  • TP-Link NC250 (Smart home camera)
  • Netatmo weather station
  • BaseTech door camera
  • eQ-3 home control access point
  • Easy home wifi adapter
  • Netatmo Welcome

He gave an electronics crash course but also insisted on the risks of playing with devices connected to mains electricity! Then, people were able to open and disassemble the devices to play with them.

I didn’t attend the second hour because another talk looked interesting: “Metasploit hardware bridge hacking” by Craig Smith. He works at Rapid7 and plays with all “moving” things, from cars to drones. To interact with those devices, a lot of tools and gadgets are required. The idea was to extend the Metasploit framework to be able to pentest these new targets. With an estimated 20.8 billion IoT devices connected (source: Gartner), pentesting projects around IoT devices will be more and more frequent. Many tools are required to test IoT devices: RF transmitters, USB fuzzers, RFID cloners, JTAG devices, CAN bus tools, etc. The philosophy behind Metasploit remains the same: based on modules (exploits, payloads, shellcodes). New modules are available to access relays which talk directly to the hardware module. Example:

msf> use auxiliary/server/local_hwbridge

A Metasploit relay is a lightweight HTTP server that just makes JSON translations between the bridge and Metasploit.
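What such a relay boils down to can be sketched as a tiny dispatcher: JSON in, a call on the local hardware handle, JSON out. The route and field names below are invented for illustration, not Metasploit's actual HWBridge API.

```python
# Minimal relay core: translate an HTTP GET path into a device call.
import json

def relay(path, device):
    """Returns (http_status, json_body) for a request path."""
    if path == "/status":
        return 200, json.dumps({"operational": device.is_up()})
    if path == "/statistics":
        return 200, json.dumps(device.stats())
    return 404, json.dumps({"status": "not supported"})
```

Wrapping this in Python's `http.server` (or any micro-framework) gives exactly the “lightweight HTTP server” role: the framework never touches the serial/CAN handle directly, it only speaks JSON to the relay.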

Example: an ELM327 diagnostic module can be used via serial USB or Bluetooth. Once connected, all the classic framework features are available as usual:


Other supported relays are RF transmitters or Zigbee. This was an interesting presentation.

For the last time slot, there were two talks: one about vulnerabilities in TP-Link devices and one presented as “Looking through the web of pages to the Internet of Things“. I chose the second one, presented by Gabriel Weaver. The abstract did not describe the topic properly (or I did not understand it) but the presentation was a review of the research performed by Gabriel: “CPTL” or “Cyber-Physical Topology Language“.

That closes the 2nd day. Tomorrow will be dedicated to the regular tracks. Stay tuned for more coverage.

[The post TROOPERS 2017 Day #2 Wrap-Up has been first published on /dev/random]

from Xavier

3 ways to outsmart attackers by using their own playbook

This blog post was authored by Andrej Budja, Frank Brinkmann, Heath Aubin, Jon Sabberton and Jörg Finkeisen from the Cybersecurity Protection Team, part of the Enterprise Cybersecurity Group.

The security landscape has changed.

Attackers often know more about the target network and all the ways they can compromise an organization than the targeted organization itself. As John Lambert writes in his blog, “Defenders think in lists. Attackers think in graphs. As long as this is true, attackers win”.

Attackers do think in graphs. Unfortunately, most organizations still think in lists and apply defenses based on asset value, rather than the security relationships between the assets.

So, what can you do to level the playing field? Use the attackers’ playbook against them!

Get ahead by creating your own graph

Start by reading John Lambert’s blog post, then do what attackers do – graph your network. At Microsoft, we are using graphs to identify potential attack paths on our assets by visualizing key assets and security relationships.
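“Thinking in graphs” is concrete enough to sketch: model assets as nodes and security relationships (“can log on to”, “is admin of”) as directed edges, then enumerate paths from an assumed foothold to Domain Admin. Tools like BloodHound do this against real Active Directory data; the edges below are made up for illustration.

```python
# Enumerate simple attack paths in a directed asset graph.
from collections import deque

def attack_paths(edges, start, target):
    """All simple (cycle-free) paths from start to target.
    edges: {node: [nodes reachable via some privilege relationship]}"""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            paths.append(path)
            continue
        for nxt in edges.get(path[-1], []):
            if nxt not in path:   # keep paths simple, avoid cycles
                queue.append(path + [nxt])
    return paths
```

Every path returned is a lateral-movement route; the defensive work described below is about deleting edges until the list is empty.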

While we have not published our internal tools (you can find some similar open source tools on the Internet), we have created a special cybersecurity engagement delivered by our global Microsoft Services team, called Active Directory Hardening (ADH).

The ADH offer uses our tools to help discover and analyze privileged account exposure and provide transition assistance for deviations from the privileged administration recommendations used at Microsoft. The ADH provides assistance by reducing the number of highly privileged Active Directory (AD) administrative accounts and transitioning them into a recommended AD administration model.

Break connections in your graph

Once you have the graph for your AD accounts, you will notice clusters as well as the different paths attackers can use to move laterally on your network. You will want to implement security controls to close those paths. One of the most effective ways to reduce the number of paths is by reducing the number of administrators (this includes users that are local administrators on their workstations) and by using dedicated, hardened workstations for all privileged users – we call these Privileged Access Workstations (PAWs).

These PAWs are deployed from a clean source and make use of modern security controls available in Windows 10. Because PAWs are not used as general purpose workstations (no email and Internet browsing allowed), they provide high security assurances for sensitive accounts and block popular attack techniques. PAWs are recommended for administration of identity systems, cloud services, and private cloud fabric as well as sensitive business functions.

You can develop and deploy PAWs on your own by following our online guide, or you can engage Microsoft Services to help accelerate your adoption of PAWs using our standard PAW offering.

Bolster your defenses

PAWs provide excellent protection for your privileged users. However, they are less effective when your highest privileged accounts (Domain Administrators and Enterprise Administrators) have already been compromised. In this situation, you need to provide Domain Administrators a new, clean, and trusted environment from which they can regain control of the compromised network.

Enhanced Security Administrative Environment (ESAE) builds upon guidance and security controls from PAWs and adds additional controls by hosting highly-privileged accounts and workstations in a dedicated administrative forest. This new, minimal AD forest provides stronger security controls that are not possible in the production environment with PAWs. These controls are used to protect your most privileged production domain accounts. For more information about the ESAE administrative forest and security concepts, please read ESAE Administrative Forest Design Approach.


“If you know your enemy and know yourself, you need not fear the result of a hundred battles”, Sun Tzu, Chinese general and military strategist, 6th century BCE.

Protecting your valuable assets against sophisticated adversaries is challenging, but it can be made easier by learning from attackers and using their playbook. Our teams are working daily on the latest cybersecurity challenges and sharing our knowledge and experience. Discover more information in the following resources:

About the Cybersecurity Protection Team

Microsoft invests more than a billion dollars each year to build security into our products and services. One of the investments is the global Enterprise Cybersecurity Group (ECG) which consists of cybersecurity experts helping organizations to confidently move to the cloud and modernize their enterprises.

The Cybersecurity Protection Team (CPT) is part of ECG, and is a global team of Cybersecurity Architects that develops, pilots, and maintains cybersecurity offerings that protect your critical assets. The team works closely with other Microsoft teams, product groups, and customers to develop guidance and services that help protect your assets.

from Microsoft Secure Blog Staff