
Author: Aman

Hi, I'm Aman! I can help you with your infosec needs!

Identity Services Engine

Posted on January 12, 2020 by Aman


Cisco Identity Services Engine (ISE) is a powerful IAM and policy-based network access solution. ISE allows you to define role- and policy-based access controls using secure communications protocols, and its powerful dashboards and reports give you real-time visibility into network authentication events. With ISE, you can simplify and unify all access into your network and services, including LAN, wireless, VPN, and guest. ISE also supports a multi-vendor ecosystem, feeding data to other tools and services within your environment to provide unparalleled visibility.

Automating DNS to IP update for ISE DACLs

Posted on January 2, 2020 (updated January 3, 2020) by Aman


In this post, we walk through the implementation of a Python script that checks whether the IP for a DNS A record has changed and, if so, automatically updates the ISE DACL.

Understand the ISE API

As with any other automation, understanding the flow and data source requirements is useful. In the attached diagram, I document the data flow as well as the important aspects of the script before writing it. Reading through Cisco’s ISE ERS API documentation is extremely useful here. The documentation includes enough information to get started; however, some trial and error is still involved. The “Update” PUT method allows you to update the DACL and, to my pleasure, it completely replaces the old DACL with a new one instead of simply appending lines to it. This makes life easier, as it simplifies regex substitution and insertion operations. Now that we understand the specific call we need to make, let’s review other aspects of the API call. Some request fields, such as the “ERS-Media-Type” header, are ISE version specific; the field isn’t mandatory, but its value changes between ISE versions.

ISE "Update" API call

In order to get the DACL ID, you’ll first need to make a “Get-All” GET request. This returns every DACL’s ID and name, which makes identification easier. The PUT request is just a tad tricky, and I’ll show you how to send the DACL in that request.
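As a minimal sketch of those two calls: the endpoint path, default port 9060, and the JSON keys below follow Cisco’s ERS documentation as I understand it, but they can vary by ISE version, and the hostname and credentials are placeholders.

```python
import json
import requests

ISE_HOST = "https://ise.example.com:9060"   # placeholder ISE node; 9060 is the usual ERS port
AUTH = ("ers-admin", "password")            # placeholder ERS credentials
HEADERS = {"Accept": "application/json", "Content-Type": "application/json"}

def get_dacl_id(name):
    """Use the Get-All call to find the ID of the DACL with the given name."""
    resp = requests.get(f"{ISE_HOST}/ers/config/downloadableacl",
                        auth=AUTH, headers=HEADERS, verify=False)  # verify=False for lab use only
    resp.raise_for_status()
    for item in resp.json()["SearchResult"]["resources"]:
        if item["name"] == name:
            return item["id"]
    raise ValueError(f"DACL {name!r} not found")

def update_dacl(dacl_id, name, new_dacl_text):
    """PUT the full replacement DACL; the Update call overwrites the existing contents."""
    payload = {"DownloadableAcl": {"id": dacl_id, "name": name, "dacl": new_dacl_text}}
    resp = requests.put(f"{ISE_HOST}/ers/config/downloadableacl/{dacl_id}",
                        auth=AUTH, headers=HEADERS, data=json.dumps(payload), verify=False)
    resp.raise_for_status()
    return resp.status_code
```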

Get the right libraries

To make life easier, you’ll need a few libraries: Python’s “re”, “requests”, “json”, and “dns.resolver”. “requests” creates the API calls, “json” handles the request and response bodies, “re” searches through the DACL, and “dns.resolver” retrieves the IP address of the A record.
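For reference, “dns.resolver” ships with the third-party dnspython package rather than the standard library, so the setup looks like this:

```python
# dns.resolver is provided by the dnspython package: pip install dnspython requests
import re            # search and replace inside the DACL text
import json          # build the ERS request bodies
import requests      # HTTP calls to the ISE ERS API
import dns.resolver  # resolve the A record to its current IP address(es)
```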

First Expressions Count

One of the more challenging aspects of updating a string such as the DACL is finding the item to replace. I use remarks in the DACL to divide the larger piece into groups of IPs based on the application or server, with “start” and “end” markers in the remarks: for example, “remark start app1.site.com” and “remark end app1.site.com” identify the set of IPs associated with app1.site.com. I use regex101.com to help build and test expressions; the site can generate the expression for a specific language, which is extremely useful. An example of an expression is r”(?<=remark start app1\.site\.com\n)([\s\S]+?)(?=remark\send\sapp1\.site\.com)”.
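As a sketch of how that expression is used, re.sub can swap out everything between the start and end remarks while leaving the markers themselves in place (the DACL contents and replacement lines below are made up for illustration):

```python
import re

# Hypothetical DACL text with remark markers delimiting app1.site.com's entries
dacl = (
    "remark start app1.site.com\n"
    "permit ip any host 192.0.2.10\n"
    "remark end app1.site.com\n"
    "permit ip any host 198.51.100.5\n"
)

pattern = r"(?<=remark start app1\.site\.com\n)([\s\S]+?)(?=remark\send\sapp1\.site\.com)"
new_lines = "permit ip any host 192.0.2.99\n"   # e.g. built from the dns.resolver output

updated_dacl = re.sub(pattern, new_lines, dacl)
print(updated_dacl)  # markers untouched, inner permit lines replaced
```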

Wrapping it up

The great thing about dns.resolver is that it provides a class-based approach to retrieving the IP portion of an “A” record, which saves us from using regex to search for the IP. To start, we search for and isolate the group of IPs associated with the application. Then, we replace those IPs with the ones returned by dns.resolver. Finally, we use the “Update” API to PUT the newly updated DACL into ISE. The DACL goes in the payload, not the header section, of the HTTP PUT. In another article, we’ll look at how to use Postman to make our lives easier when testing ISE APIs.
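For the resolution step, a minimal example (the hostname is illustrative; dnspython 2.x uses resolve(), while older 1.x releases use query()):

```python
import dns.resolver

def current_ips(hostname):
    """Return the IPv4 addresses currently published in the host's A record."""
    answers = dns.resolver.resolve(hostname, "A")      # dnspython 2.x
    return [rdata.to_text() for rdata in answers]

ips = current_ips("app1.site.com")
new_lines = "".join(f"permit ip any host {ip}\n" for ip in ips)
# new_lines then feeds the re.sub() call shown earlier, and the result is PUT back via the Update API
```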

Data classification strategies for 2020

Posted on December 31, 2019 (updated January 3, 2020) by Aman


Data is everywhere within the organization, but few people know the significance of their data to the business. In this post, I discuss strategies for performing data classification. What makes data classification complex is the sheer number of items that need classification. Overall, the best approach is to delegate the classification process to the document owners, and that strategy needs to be identified early. The information security team’s role here is to identify each data repository’s classification capability; not all repositories have an easy and intuitive classification method. Let’s take a look at some possible approaches.

Identify current state

Data classification is a process by which information security and legal teams identify critical and sensitive organizational data. To succeed in this endeavor, you’ll need to bite off small chunks and work with patience and determination. If you’re late to the game, start by identifying your repositories’ built-in capability to classify data. Investigate whether you’re able to simply turn the feature on within the application. It might be offered in an enhanced version of the product that requires additional licensing or cost; however, data classification is important enough that the additional cost will be well worth it.

A data classification guide should be provided to all document owners, and it should be simple to follow and understand. A typical classification system includes three levels, such as sensitive, secret, and unclassified; additional classification types might be useful depending on your specific business needs. To begin with, classify your data directly in the document. The classification label should be searchable and readable by search and regular-expression engines, which allows a DLP solution to scan the document and identify its classification level.
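As an illustration (the in-document label format here is hypothetical), a DLP-style check only needs a simple pattern to pick out the level:

```python
import re

# Hypothetical label embedded in each document, e.g. "Classification: Secret"
LABEL = re.compile(r"Classification:\s*(Sensitive|Secret|Unclassified)", re.IGNORECASE)

def classification_of(text):
    """Return the first classification label found in the document text, or None."""
    match = LABEL.search(text)
    return match.group(1).capitalize() if match else None

print(classification_of("Classification: SECRET\nQ3 acquisition plans ..."))  # -> Secret
```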

Fill in the gaps

Enterprise document storage solutions should have robust scripting and plugin capabilities, and in-house development can help create data-classification add-ons and plugins for any document management solution that does not provide such capability out of the box. Some collaboration suites include a marketplace that readily offers such extras; however, investigate any add-on for security issues before implementing it.

Crowdsource your classification efforts

Of course, this isn’t always possible, and in that case you’ll need to set up a meeting with the document owners and walk them through the importance of the data classification program. Users are more willing to participate in mundane yet important tasks if the process is engaging and rewarding. Identify ways to gamify the process: provide rewards, maintain a leaderboard, and acknowledge those at the top during company meetings. Data classification is a critical process; therefore, participation should be part of everyone’s job responsibilities. Executive leadership needs to buy in and ensure that data classification is an essential part of everyone’s routine rather than a burden added to the employee’s workload.

Improve your vuln management program in 2020

Posted on December 31, 2019 (updated January 3, 2020) by Aman


When it comes to vulnerability management, you can get the first 20% up and running without many issues. Deploying a management panel, adding sensors, and running discovery scans are fairly intuitive tasks, and a few platforms have wizard-like interfaces that will get you to the 20% mark within a few hours. However, to get the most out of your VMS (Vulnerability Management System), you’ll need to work within and outside the security team, with a mixture of stakeholders, to ensure your efforts are worthwhile and that the vulnerabilities that are detected are handled quickly and efficiently.

Whatever your ticketing and task management platform, it’s important that vulnerability management solutions do not operate in a vacuum. It isn’t necessarily true, or scalable, that your VMS operator should be the person who also fixes or patches the vulnerable system, or the one responsible for identifying who the vulnerable system belongs to. In large enterprises, or even small ones, identifying system owners can be challenging. Who owns the operating system? Are they the ones responsible for patching firmware? Who owns the web server? Are they the ones that own the application?

Each system can have varying levels of complexity as well as a myriad of owners responsible for different aspects of the affected system. Documenting who owns the system is important; however, scans should be set up so that scanned assets also include asset-identifying metadata that references ownership of the system or platform. It’s good practice to include this information in a summary or description field of the scan so it’s handy for automation.

Once ample metadata is provided, automation can be used to create and assign tickets for high-severity vulnerabilities to the appropriate owner. This ensures that each vulnerability is handled in a timely manner and isn’t stuck within the confines of the VMS itself. Whether the ticketing system is Zendesk, ServiceNow, or Jira, each platform can be configured via scripting and API services to create and assign tickets with the appropriate severity rating.
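As a rough sketch using Jira’s REST API (the instance URL, project key, credentials, and field choices below are placeholders, not a prescribed setup), the automation might open and assign a ticket like this:

```python
import requests

JIRA_URL = "https://example.atlassian.net"          # placeholder Jira instance
AUTH = ("automation@example.com", "api-token")      # placeholder API credentials

def open_vuln_ticket(summary, description, owner_account_id):
    """Create a high-priority issue for a critical finding and assign it to the system owner."""
    payload = {
        "fields": {
            "project": {"key": "SEC"},              # assumed project key
            "issuetype": {"name": "Task"},
            "summary": summary,
            "description": description,
            "priority": {"name": "Highest"},
            "assignee": {"accountId": owner_account_id},
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["key"]                       # e.g. "SEC-123"
```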

Of course, in order to get the highest-fidelity scans possible, the scanning engine needs accurate information about the system being scanned. The VMS should be configured with certificate-based or other forms of authentication so it can log into the OS or application and accurately assess the system for vulnerabilities. Unauthenticated scans are good for asset management but not for vulnerability management. Authenticated scans increase the signal-to-noise ratio, help identify critical vulnerabilities, and let you assign the appropriate SLA.

Ensure that your VMS has only the minimum credential level required to perform the appropriate scanning. This can be accomplished with a sudoers file or by setting appropriate privilege levels for the required commands. LDAP and HashiCorp Vault are good examples of centralized authentication and credential management systems. It is a common and recommended practice to rotate credentials on a set interval and to log all activities performed by the system, which allows quick detection and prevention of unintended actions.
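For the credential side, a minimal sketch using HashiCorp Vault’s Python client, hvac (the Vault address, mount point, and secret path are assumptions), keeps the rotating scan-account password out of the scanner’s own configuration:

```python
import hvac

# Placeholder Vault address and token; in practice the token would come from AppRole or another auth method
client = hvac.Client(url="https://vault.example.com:8200", token="s.xxxxxxxx")

def scan_credentials():
    """Fetch the scan account's current credentials from a KV v2 secrets engine."""
    secret = client.secrets.kv.v2.read_secret_version(path="vms/scan-account", mount_point="secret")
    data = secret["data"]["data"]
    return data["username"], data["password"]

username, password = scan_credentials()  # hand these to the scanner at scan time, never store them in it
```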

Last but not least, establish realistic SLAs and MTTRs (mean time to remediation) and track these metrics, giving the power back to the system owners to incorporate their own patching strategies for their respective systems. The patching process should automatically alert the scan engineer that a remediation task has been performed, and it should allow the ticketing platform to kick off another scan task, via the VMS API, to validate the fix.

New Team Members

Posted on December 30, 2019 by Aman


Something great

Welcome to my new site

Posted on March 17, 2019 by Aman


Hi everyone, hope you love it!
