
Why Do Organizations Need a Security Operations Center (SOC)?

  • 26 January 2016

Data Explosion

The multitude of devices, users, and generated traffic combine to create a proliferation of data of incredible volume, velocity, and variety. As a result, organizations need a way to protect, utilize, and gain real-time insight from “big data.” This intelligence is valuable not only to businesses and consumers, but also to hackers. Robust underground marketplaces have arisen where hackers sell credit card information, account usernames, passwords, national secrets (WikiLeaks), and intellectual property. How does anyone keep secrets protected from hackers anymore?

    In the past, when the network infrastructure was straightforward and perimeters still existed, controlling access to data was much simpler. If your secrets rested within the company network, all you had to do to keep the data safe was make sure you had a strong firewall in place. However, as data became available through the Internet, mobile devices, and the cloud, a firewall alone was no longer enough. Companies tried to solve each security problem in a piecemeal manner, tacking on security devices the way one patches holes in a wall. But because these products did not interoperate, they could not be coordinated into a defense against hackers.


     To meet the security problems organizations face today, a paradigm shift needs to occur. Businesses need the ability to secure data, collect it, and aggregate it into an intelligent format, so that real-time alerting and reporting can take place. The first step is to establish complete visibility, so that your data and everyone who accesses it can be monitored. Next, you need to understand context, so that you can focus on the valued assets that are critical to your business. Finally, utilize the intelligence gathered to reduce your attack surface and stop attacks before data is exfiltrated. So, how do we get started?

Data Collection

    Your first job is to aggregate information from every device into one place. This means collecting logs from cloud, virtual, and physical sources: network devices, applications, servers, databases, desktops, and security devices. With Software-as-a-Service (SaaS) applications deployed in the cloud, it is important to collect logs from those applications as well, since data stored in the cloud can span everything from human resource management to customer information. Collecting this information gives you visibility into who is accessing your company’s information, what information they are accessing, and when that access occurs. The goal is to capture usage patterns and look for signs of malicious behavior.
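
A minimal sketch of what such centralized collection can look like in practice is shown below, assuming a syslog-style UDP listener that writes every forwarded log line, plus receipt metadata, to one central store. The port number, file path, and record fields are illustrative assumptions, not a reference to any specific product.

```python
# Minimal sketch of centralized log collection: devices, apps, and cloud
# services forward log lines to one listener, which archives them with
# receipt metadata. Port and file path are assumptions for the sketch.
import json
import socket
import time

COLLECTOR_PORT = 5140                       # assumed port for forwarded logs
ARCHIVE_PATH = "central_log_store.jsonl"    # assumed central store

def run_collector() -> None:
    """Receive forwarded log lines and append them to one central store."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", COLLECTOR_PORT))
    with open(ARCHIVE_PATH, "a", encoding="utf-8") as archive:
        while True:
            raw, (source_ip, _) = sock.recvfrom(65535)
            record = {
                "received_at": time.time(),   # when the collector saw it
                "source_ip": source_ip,       # which device or app sent it
                "raw": raw.decode("utf-8", errors="replace"),
            }
            archive.write(json.dumps(record) + "\n")
            archive.flush()

if __name__ == "__main__":
    run_collector()
```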

    Typically, data theft progresses through five stages. First, hackers “research” their target to find a way into the network. After “infiltrating” the network, they may install an agent that lies dormant and gathers information until they “discover” where the payload is hosted and how to “acquire” it. Once the target is captured, the final step is to “exfiltrate” the information out of the network. Most advanced attacks progress through these five stages, and understanding them helps you look for clues that an attack is taking place in your environment and decide how to stop the attacker from reaching their target. The key to determining which logs to collect is to focus on records where an actor is accessing information or systems.
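
To make the five stages concrete, the sketch below maps each stage to the kind of log evidence one might look for; the signal names are illustrative assumptions, not a complete detection catalogue.

```python
# Illustrative mapping of the five data-theft stages to example log evidence.
ATTACK_STAGES = {
    "research":    ["port scans", "repeated DNS lookups of external-facing hosts"],
    "infiltrate":  ["phishing link clicks", "logins from unusual geographies"],
    "discover":    ["internal network sweeps", "unusual directory enumeration"],
    "acquire":     ["bulk database reads", "access to sensitive file shares"],
    "exfiltrate":  ["large outbound transfers", "uploads to unknown external hosts"],
}

def stages_with_evidence(observed_signals: set[str]) -> list[str]:
    """Return the attack stages for which at least one signal was observed."""
    return [
        stage
        for stage, signals in ATTACK_STAGES.items()
        if observed_signals.intersection(signals)
    ]

# Example: exfiltration evidence points to the final stage of an attack.
print(stages_with_evidence({"large outbound transfers"}))   # ['exfiltrate']
```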

Data Integration

    Once machine data is collected, it needs to be parsed to derive intelligence from cryptic log messages. Automated, rule-based processing is needed here: asking an analyst to review raw logs manually makes finding an attacker very difficult, because attack activity must be separated by hand from the logs of normal behavior. The solution is to normalize machine logs so that queries can pull context-aware information from log data. For example, ARMA SIEM’s Main Sensor normalizes and categorizes log data into many meta fields. Normalized logs are more useful because you no longer need an expert on a particular device to interpret them. By enriching logs with metadata, you turn strings of text into information that can be indexed and searched.
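
As an illustration of normalization (not ARMA SIEM’s actual schema, which is not described here), the sketch below turns one cryptic sshd failure line into named, searchable metadata fields using a regular expression; the field names and sample line are assumptions for the example.

```python
# Sketch of log normalization: a regex extracts named metadata fields from
# one raw sshd line so the event can be indexed and searched.
import re

SSH_FAIL = re.compile(
    r"Failed password for (?P<user>\S+) from (?P<src_ip>\S+) port (?P<src_port>\d+)"
)

def normalize(raw_line: str) -> dict | None:
    """Return a normalized event, or None if the line does not match."""
    match = SSH_FAIL.search(raw_line)
    if not match:
        return None
    return {
        "category": "authentication",      # enriched metadata
        "outcome": "failure",
        "user": match.group("user"),
        "src_ip": match.group("src_ip"),
        "src_port": int(match.group("src_port")),
    }

sample = "Jan 26 10:15:02 web01 sshd[4242]: Failed password for root from 203.0.113.7 port 51514 ssh2"
print(normalize(sample))
# {'category': 'authentication', 'outcome': 'failure', 'user': 'root',
#  'src_ip': '203.0.113.7', 'src_port': 51514}
```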

Data Analytics

    Normalized logs are indexed and categorized to make it easy for a correlation engine to process them and identify patterns based on heuristics and security rules. It is here that the art of combining logs from multiple sources and correlating events together helps create real-time alerts. This preprocessing also speeds up correlation and produces vendor-agnostic event logs, which give analysts the ability to build reports and filters with simple queries.
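
The sketch below shows one such correlation rule over normalized events: several failed logins followed by a success from the same source IP within a short window raises a real-time alert. The thresholds, field names, and sample events are assumptions for illustration.

```python
# Sketch of a correlation rule: repeated login failures followed by a
# success from the same source IP within a time window raises an alert.
from collections import defaultdict

FAIL_THRESHOLD = 5      # assumed number of failures that looks suspicious
WINDOW_SECONDS = 300    # assumed correlation window

def correlate(events: list[dict]) -> list[str]:
    """events: time-ordered normalized dicts with 'time', 'src_ip', 'outcome'."""
    failures = defaultdict(list)   # src_ip -> timestamps of recent failures
    alerts = []
    for event in events:
        ip, ts = event["src_ip"], event["time"]
        if event["outcome"] == "failure":
            failures[ip] = [t for t in failures[ip] if ts - t <= WINDOW_SECONDS]
            failures[ip].append(ts)
        elif event["outcome"] == "success":
            recent = [t for t in failures[ip] if ts - t <= WINDOW_SECONDS]
            if len(recent) >= FAIL_THRESHOLD:
                alerts.append(f"possible brute-force success from {ip} at t={ts}")
    return alerts

events = [{"time": i, "src_ip": "203.0.113.7", "outcome": "failure"} for i in range(5)]
events.append({"time": 6, "src_ip": "203.0.113.7", "outcome": "success"})
print(correlate(events))   # ['possible brute-force success from 203.0.113.7 at t=6']
```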

SOC (Security Operations Center)

    A SOC is composed of trained, competent experts, operational procedures, and technical infrastructure that together provide situational awareness through the detection, containment, and remediation of IT threats. A SOC detects and manages security incidents for the enterprise, ensuring they are properly identified, analyzed, communicated, acted upon, investigated, and reported. The SOC also monitors applications to identify a possible cyber-attack or intrusion and to determine whether it is a real, malicious threat and whether it could have an impact on the business. SOCs are typically built around a SIEM (Security Information and Event Management) system, which aggregates and correlates data from security feeds across network assets.
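
As a rough illustration of the incident lifecycle just described, the sketch below tracks an incident through its stages in order; the field names and stage list are assumptions for the example, not a standard SOC schema.

```python
# Sketch of a SOC incident record advancing through the lifecycle stages
# named above; stage names and fields are illustrative assumptions.
from dataclasses import dataclass, field

LIFECYCLE = ["identified", "analyzed", "communicated",
             "acted_upon", "investigated", "reported"]

@dataclass
class Incident:
    title: str
    severity: str                      # e.g. "low", "medium", "high"
    source_alert: str                  # the correlation rule that fired
    completed_stages: list[str] = field(default_factory=list)

    def advance(self, stage: str) -> None:
        """Record completion of the next lifecycle stage, in order."""
        expected = LIFECYCLE[len(self.completed_stages)]
        if stage != expected:
            raise ValueError(f"expected stage '{expected}', got '{stage}'")
        self.completed_stages.append(stage)

incident = Incident("possible brute-force success", "high", "auth-correlation-rule")
incident.advance("identified")
incident.advance("analyzed")
print(incident.completed_stages)       # ['identified', 'analyzed']
```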

