Domain 6 Security Assessment and Testing

  • Security assessment and testing programs are an important mechanism for validating the ongoing effectiveness of security controls
    • they include a variety of tools, such as vulnerability assessments, penetration tests, software testing, audits, and other control validation
  • Every org should have a security assessment and testing program defined and operational
  • Security assessments: comprehensive reviews of the security of a system, application, or other tested environment
    • during a security assessment, a trained information security professional performs a risk assessment that identifies vulnerabilities in the tested environment that may allow a compromise and makes recommendations for remediation, as needed
    • a security assessment includes the use of security testing tools, but goes beyond scanning and manual penetration tests
    • the main work product of a security assessment is normally an assessment report addressed to management that contains the results of the assessment in nontechnical language and concludes with specific recommendations for improving the security of the tested environment
  • An organization’s audit strategy will depend on its size, industry, financial status and other factors
    • a small non-profit, a small private company and a small public company will have different requirements and goals for their audit strategies
    • the audit strategy should be assessed and tested regularly to ensure that the organization is not doing a disservice to itself with the current strategy
    • there are three types of audit strategies: internal, external, and third-party
  • Artifact: a piece of evidence, such as text or a reference to a resource, submitted in response to a question
  • Assessment: testing or evaluation of controls to understand which are implemented correctly, operating as intended and producing the desired outcome in meeting the security or privacy requirements of a system or org
  • Audit: process of reviewing a system for compliance against a standard or baseline (e.g. audit of security controls, baselines, financial records) can be formal and independent, or informal/internal
  • Chaos Engineering: discipline of experiments on a software system in production to build confidence in the system’s capabilities to withstand turbulent/unexpected conditions
  • Compliance Calendar: tracks an org’s audits, assessments, required filings, due dates, and related obligations
  • Compliance Tests: an evaluation that determines if an org’s controls are being applied according to management policies and procedures
  • Penetration Testing/Ethical Penetration Testing: security testing and assessment in which testers actively attempt to circumvent/defeat a system’s security features; typically constrained by contract to stay within specified Rules of Engagement (RoE)
  • Examination: process of reviewing/inspecting/observing/studying/analyzing specs/mechanisms/activities to understand, clarify, or obtain evidence
  • Findings: results created by the application of an assessment procedure
  • Judgement Sampling: AKA purposive or authoritative sampling, a non-probability sampling technique where members are chosen only on the basis of the researcher’s knowledge and judgement
  • Misuse Case Testing: testing strategy from a hostile actor’s point of view, attempting to lead to integrity failures, malfunctions, or other security or safety compromises
  • Plan of Action and Milestones (POA&M): a document identifying tasks to be accomplished, including details, resources, milestones, and completion target dates
  • RoE: Rules of Engagement, the set of rules/constraints/boundaries that establish limits of participant activity; in ethical pen testing, the RoE defines the scope of testing and establishes liability limits for both the testers and the sponsoring org or system owners
  • Statistical Sampling: process of selecting subsets of examples from a population with the objective of estimating properties of the total population
  • Substantive Test: testing technique used by an auditor to obtain the audit evidence in order to support the auditor’s opinion
  • Testing: process of exercising one or more assessment objects (activities or mechanisms) under specified conditions to compare actual to expected behavior
  • Trust Services Criteria (TSC): used by an auditor when evaluating the suitability of the design and operating effectiveness of controls relevant to the security, availability, or processing integrity of information and systems, or the confidentiality or privacy of the info processed by the entity

6.1 Design and validate assessment, test, and audit strategies (OSG-9 Chpt 15)

  • 6.1.1 Internal
    • An organization’s security staff can perform security tests and assessments, and the results are meant for internal use only, designed to evaluate controls with an eye toward finding potential improvements
    • An internal audit strategy should be aligned to the organization’s business and day-to-day operations
      • e.g. a publicly traded company will have a more rigorous internal auditing strategy than a privately held company
    • Designing the audit strategy should include laying out applicable regulatory requirements and compliance goals
    • Internal audits are performed by an organization’s internal audit staff and are typically intended for internal audiences
  • 6.1.2 External
    • An external audit strategy should complement the internal strategy, providing regular checks to ensure that procedures are being followed and the organization is meeting its compliance goals
    • External audits are performed by an outside auditing firm
      • these audits have a high degree of external validity because the auditors performing the assessment theoretically have no conflict of interest with the org itself
      • audits by these firms are generally considered acceptable by most investors and governing bodies
  • 6.1.3 Third-party
    • Third-party audits are conducted by, or on behalf of, another org
    • In the case of a third-party audit, the org initiating the audit generally selects the auditors and designs the scope of the audit
    • The Statement on Standards for Attestation Engagements document 18 (SSAE 18), titled Reporting on Controls, provides a common standard to be used by auditors performing assessments of service orgs; the intent is to allow the service org to conduct one external assessment, instead of multiple third-party assessments, and then share the resulting report with customers and potential customers
      • outside of the US, similar engagements are conducted under the International Standard for Attestation Engagements (ISAE) 3402, Assurance Reports on Controls at a Service Organization
    • SSAE 18 and ISAE 3402 engagements are commonly referred to as service organization controls (SOC) audits
    • Three forms of SOC audits:
      • SOC 1 Engagements: assess the organization’s controls that might impact the accuracy of financial reporting
      • SOC 2 Engagements: assess the organization’s controls that affect the security (confidentiality, integrity, and availability) and privacy of information stored in a system
        • SOC 2 audit results are confidential and are usually only shared outside an org under an NDA
      • SOC 3 Engagements: assess the organization’s controls that affect the security (confidentiality, integrity, and availability) and privacy of information stored in a system
        • however, SOC 3 audit results are intended for public disclosure
    • Two types of SOC reports:
      • Type I Reports: provide the auditor’s opinion on the description provided by management and the suitability of the design of the controls
        • type I reports also cover only a specific point in time, rather than an extended period
        • think of Type I report as more of a documentation review
      • Type II Reports: go further and also provide the auditor’s opinion on the operating effectiveness of the controls
        • the auditor actually confirms the controls are functioning properly
        • Type II reports also cover an extended period of time, at least 6 months
        • think of Type II report as similar to a traditional audit; the auditor is checking the paperwork, and verifying the controls are functioning properly
      • Type II reports are considered much more reliable than Type I reports (Type I reports simply take the service org’s word that the controls are implemented as described)

6.2 Conduct security control testing (OSG-9 Chpt 15)

  • Security control testing can include testing of the physical facility, logical systems and applications; common testing methods:
  • 6.2.1 Vulnerability assessment
    • Vulnerabilities: weaknesses in systems and security controls that might be exploited by a threat
    • Vulnerability assessments: examining systems for these weaknesses
    • The goal of a vulnerability assessment is to identify elements in an environment that are not adequately protected – and not necessarily from a technical perspective; you can also assess the vulnerability of physical security or the external reliance on power, for instance
      • can include personnel testing, physical testing, system and network testing, and other facilities tests
    • Vulnerability assessments are some of the most important testing tools in the information security professional’s toolkit
    • Security Content Automation Protocol (SCAP): provides a common framework for discussion and facilitation of automation of interactions between different security systems (sponsored by NIST)
      • SCAP components related to vulnerability assessments:
        • Common Vulnerabilities and Exposures (CVE): provides a naming system for describing security vulnerabilities
        • Common Vulnerability Scoring System (CVSS): provides a standardized scoring system for describing the severity of security vulnerabilities (see the vector-parsing sketch at the end of this subsection)
        • Common Configuration Enumeration (CCE): provides a naming system for system config issues
        • Common Platform Enumeration (CPE): provides a naming system for operating systems, applications, and devices
        • Extensible Configuration Checklist Description Format (XCCDF): provides a language for specifying security checklists
        • Open Vulnerability and Assessment Language (OVAL): provides a language for describing security testing procedures
    • Vulnerability scans automatically probe systems, applications, and networks looking for weaknesses that could be exploited by an attacker
    • Four main categories of vulnerability scans:
      • network discovery scans
      • network vulnerability scans
      • web application vulnerability scans
      • database vulnerability scans
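    • A minimal sketch (not from the OSG) of working with one SCAP component: splitting a CVSS v3.1 vector string into its base metrics; the vector shown and the metric-name mapping are illustrative assumptions:
```python
# Minimal sketch: split a CVSS v3.1 vector string into its base metrics.
# The vector below is illustrative, not tied to any specific CVE.
CVSS_METRIC_NAMES = {
    "AV": "Attack Vector", "AC": "Attack Complexity", "PR": "Privileges Required",
    "UI": "User Interaction", "S": "Scope", "C": "Confidentiality",
    "I": "Integrity", "A": "Availability",
}

def parse_cvss_vector(vector: str) -> dict:
    """Return a dict of metric -> value for a CVSS v3.1 vector string."""
    parts = vector.split("/")
    if not parts[0].startswith("CVSS:"):
        raise ValueError("not a CVSS vector")
    return dict(part.split(":", 1) for part in parts[1:])

if __name__ == "__main__":
    vec = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
    for metric, value in parse_cvss_vector(vec).items():
        print(f"{CVSS_METRIC_NAMES.get(metric, metric)}: {value}")
```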
  • 6.2.2 Penetration testing
    • Penetration tests go beyond vulnerability testing techniques because they actually attempt to exploit systems
    • NIST defines the penetration testing process as consisting of four phases:
    • planning: includes agreement on the scope of the test and the rules of engagement
      • ensures that both the testing team and management are in agreement about the nature of the test and that it is explicitly authorized
    • information gathering and discovery: uses manual and automated tools to collect information about the target environment (see the discovery sketch at the end of this subsection)
      • basic reconnaissance (website mapping)
      • network discovery
      • testers probe for system weaknesses using network, web and db vuln scans
    • attack: seeks to use manual and automated exploit tools to attempt to defeat system security
      • step where pen testing goes beyond vuln scanning as vuln scans don’t attempt to actually exploit detected vulns
    • reporting: summarizes the results of the pen testing and makes recommendations for improvements to system security
    • tests are normally categorized into three groups:
      • white-box penetration test:
        • provides the attackers with detailed information about the systems they target
        • this bypasses many of the reconnaissance steps that normally precede attacks, shortening the time of the attack and increasing the likelihood that it will find security flaws
        • these tests are sometimes called “known environment” tests
      • gray-box penetration test:
        • AKA partial knowledge tests, these are sometimes chosen to balance the advantages and disadvantages of white- and black-box penetration tests
        • this is particularly common when black-box results are desired but costs or time constraints mean that some knowledge is needed to complete the testing
        • these tests are sometimes called “partially known environment” tests
      • black-box penetration test:
        • does not provide attackers with any information prior to the attack
        • this simulates an external attacker trying to gain access to information about the business and technical environment before engaging in an attack
        • these tests are sometimes called “unknown environment” tests
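    • A minimal sketch of the kind of scripted network discovery used in the information gathering phase: a simple TCP connect scan; the host and port list are placeholders, and any real scan must stay inside the agreed RoE:
```python
# Minimal TCP connect-scan sketch for the discovery phase of a pen test.
# Host and ports are placeholders; only scan systems covered by the RoE.
import socket

def scan_ports(host: str, ports: list[int], timeout: float = 1.0) -> list[int]:
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```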
  • 6.2.3 Log reviews
    • Security Information and Event Management (SIEM): packages that collect information using the syslog functionality present in many devices, operating systems, and applications (see the forwarding sketch at the end of this subsection)
    • Admins may choose to deploy logging policies through Windows Group Policy Objects (GPOs)
    • Logging systems should also make use of the Network Time Protocol (NTP) to ensure that clocks are synchronized on systems sending log entries to the SIEM as well as the SIEM itself, ensuring info from multiple sources have a consistent timeline
    • Information security managers should also periodically conduct log reviews, particularly for sensitive functions, to ensure that privileged users are not abusing their privileges
    • Network flow (NetFlow) logs are particularly useful when investigating security incidents
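    • A minimal sketch (an assumed setup, not any specific SIEM’s API) of forwarding application events to a SIEM’s syslog collector using Python’s standard library; the collector address is a placeholder:
```python
# Minimal sketch: forward application events to a SIEM syslog collector.
# The collector hostname/port are placeholders for your environment.
import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(address=("siem.example.internal", 514))
logger = logging.getLogger("app-security")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# Events forwarded this way land in the SIEM with a syslog timestamp,
# which is why NTP synchronization across senders matters.
logger.info("user=jdoe action=privileged-login result=success")
```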
  • 6.2.4 Synthetic transactions
    • Synthetic transactions: scripted transactions with known expected results
    • Dynamic testing may include the use of synthetic transactions to verify system performance; synthetic transactions are run against code and the output is compared to the expected state (see the sketch below)
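    • A minimal sketch of a synthetic transaction: a scripted call to a hypothetical health-check endpoint whose expected output is known in advance; the URL and expected value are assumptions:
```python
# Minimal synthetic-transaction sketch: call a known endpoint and compare
# the actual output to the expected result. URL/expected value are placeholders.
import json
import urllib.request

def run_synthetic_check(url: str, expected_status: str) -> bool:
    """Return True if the endpoint responds with the expected status field."""
    with urllib.request.urlopen(url, timeout=5) as response:
        body = json.loads(response.read().decode("utf-8"))
    return body.get("status") == expected_status

if __name__ == "__main__":
    ok = run_synthetic_check("https://app.example.internal/health", "healthy")
    print("synthetic transaction passed" if ok else "synthetic transaction FAILED")
```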
  • 6.2.5 Code review and testing
    • Code review and testing is “one of the most critical components of a software testing program”
    • These procedures provide third-party reviews of the work performed by developers before moving code into a production environment, possibly discovering security, performance, or reliability flaws in apps before they go live and negatively impact business operations
    • In code review, AKA peer review, developers other than the one who wrote the code review it for defects
    • Fagan inspections: the most formal code review process follows six steps: 1) planning 2) overview 3) preparation 4) inspection 5) rework 6) follow-up
    • Static application security testing (SAST): evaluates the security of software without running it by analyzing either the source code or the compiled application (see the sketch at the end of this subsection)
    • Dynamic application security testing (DAST): evaluates the security of software in a runtime environment and is often the only option for organizations deploying applications written by someone else
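    • A minimal sketch of the kind of check a SAST tool performs: flagging risky calls (here eval/exec) by analyzing source code without ever running it; the rule set is illustrative:
```python
# Minimal SAST-style sketch: statically flag calls to eval()/exec() in source
# code without running it, using Python's built-in ast module.
import ast

RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list[int]:
    """Return line numbers where a risky call appears in the given source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(node.lineno)
    return findings

if __name__ == "__main__":
    sample = "user_input = input()\nresult = eval(user_input)\n"
    print(find_risky_calls(sample))  # -> [2]
```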
  • 6.2.6 Misuse case testing
    • Misuse case testing: AKA abuse case testing - used by software testers to evaluate the vulnerability of their software to known risks
    • In misuse case testing, testers first enumerate the known misuse cases, then attempt to exploit those cases with manual or automated attack techniques (see the sketch below)
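    • A minimal sketch of a misuse case test: hostile inputs (e.g. a classic SQL injection string) are fed to an input validator and must be rejected; validate_username is a hypothetical example function, shown inline:
```python
# Minimal misuse-case test sketch: attack-style inputs against a hypothetical
# validator (validate_username is assumed, included here for completeness).
import re

def validate_username(value: str) -> bool:
    """Example validator: letters, digits, underscore, 3-20 characters."""
    return re.fullmatch(r"[A-Za-z0-9_]{3,20}", value) is not None

MISUSE_CASES = ["' OR '1'='1", "admin; DROP TABLE users;--", "<script>alert(1)</script>"]

def test_rejects_hostile_input():
    for payload in MISUSE_CASES:
        assert not validate_username(payload), f"accepted hostile input: {payload}"

if __name__ == "__main__":
    test_rejects_hostile_input()
    print("all misuse cases rejected")
```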
  • 6.2.7 Test coverage analysis
    • A test coverage analysis is used to estimate the degree of testing conducted against new software
    • Test coverage = number of use cases tested / total number of use cases
      • requires enumerating all possible use cases (a difficult task), and anyone using test coverage calculations should understand the process used to develop the input values (see the worked sketch after this list)
    • Five common criteria used for test coverage analysis:
      • branch coverage: has every IF statement been executed under all IF and ELSE conditions?
      • condition coverage: has every logical test in the code been executed under all sets of inputs?
      • functional coverage: has every function in the code been called and returned results?
      • loop coverage: has every loop in the code been executed under conditions that cause code execution multiple times, only once, and not at all?
      • statement coverage: has every line of code been executed during the test?
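    • A short worked sketch of the coverage formula above, plus a tiny function showing why statement coverage and branch coverage differ; the numbers and function are illustrative:
```python
# Worked sketch of the test-coverage formula and of statement vs. branch coverage.
def apply_discount(price: int, is_member: bool) -> int:
    if is_member:            # branch coverage needs both the True and False paths
        price = price - 10   # statement coverage only needs this line to run once
    return price

# Test coverage = use cases tested / total use cases (values are illustrative)
tested, total = 45, 60
print(f"test coverage: {tested / total:.0%}")  # -> 75%

# One call exercises every statement (statement coverage reached) ...
assert apply_discount(100, True) == 90
# ... but branch coverage also requires the is_member == False path:
assert apply_discount(100, False) == 100
```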
  • 6.2.8 Interface testing
    • Interface testing assesses the performance of modules against the interface specs to ensure that they will work together properly when all the development efforts are complete
    • Three types of interfaces should be tested:
      • application programming interfaces (APIs): offer a standardized way for code modules to interact and may be exposed to the outside world through web services
        • should test APIs to ensure they enforce all security requirements (see the sketch at the end of this subsection)
      • user interfaces (UIs): examples include graphical user interfaces (GUIs) and command-line interfaces
        • UIs provide end users with the ability to interact with the software, and tests should include reviews of all UIs
      • physical interfaces: exist in some apps that manipulate machinery, logic controllers, or other objects
        • software testers should pay careful attention to physical interfaces because of the potential consequences if they fail
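    • A minimal sketch of an API interface test: an unauthenticated request to a protected endpoint should be rejected; the URL and the expected 401/403 behavior are assumptions about a hypothetical API:
```python
# Minimal API interface-test sketch: an unauthenticated request to a protected
# endpoint should be rejected. The endpoint URL and status codes are placeholders.
import urllib.error
import urllib.request

def unauthenticated_request_is_rejected(url: str) -> bool:
    """Return True if the API rejects a request that carries no credentials."""
    request = urllib.request.Request(url)  # deliberately no Authorization header
    try:
        with urllib.request.urlopen(request, timeout=5):
            return False  # a 2xx response means the control failed
    except urllib.error.HTTPError as err:
        return err.code in (401, 403)

if __name__ == "__main__":
    print(unauthenticated_request_is_rejected("https://api.example.internal/v1/records"))
```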
  • 6.2.9 Breach attack simulations
    • Breach and attack simulation (BAS): platforms that seek to automate some aspects of penetration testing
    • The BAS platform is not actually waging attacks, but conducting automated testing of security controls to identify deficiencies
    • Designed to inject threat indicators onto systems and networks in an effort to trigger other security controls (e.g. place a suspicious file on a server)
      • detection and prevention controls should immediately detect and/or block this traffic as potentially malicious
    • See:
      • OWASP Web Security Testing Guide
      • OSSTMM (Open Source Security Testing Methodology Manual)
      • NIST 800-115
      • FedRAMP Penetration Test Guidance
      • PCI DSS Information Supplemental on Penetration Testing
  • 6.2.10 Compliance checks
    • Orgs should create and maintain compliance plans documenting each of their regulatory obligations and map those to the specific security controls designed to satisfy each objective (see the mapping sketch at the end of this subsection)
    • Compliance checks are an important part of security testing and assessment programs for regulated firms: these checks verify that all of the controls listed in a compliance plan are functioning properly and are effectively meeting regulatory requirements
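    • A minimal sketch of a compliance plan as a simple mapping from regulatory obligations to controls, flagging any obligation with no control mapped; the obligations and control IDs are illustrative:
```python
# Minimal compliance-plan sketch: map each regulatory obligation to the controls
# meant to satisfy it and flag gaps. Obligations and control IDs are illustrative.
compliance_plan = {
    "PCI DSS 8.3 - MFA for admin access": ["IAM-02", "IAM-07"],
    "PCI DSS 10.2 - audit logging": ["LOG-01"],
    "HIPAA 164.312(a) - access control": [],  # no control mapped yet -> a gap
}

def find_gaps(plan: dict[str, list[str]]) -> list[str]:
    """Return obligations that have no controls mapped to them."""
    return [obligation for obligation, controls in plan.items() if not controls]

if __name__ == "__main__":
    for gap in find_gaps(compliance_plan):
        print(f"GAP: no control mapped for '{gap}'")
```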

6.3 Collect security process data (e.g. technical and administrative) (OSG-9 Chpts 15,18)

  • 6.3.1 Account management
    • Preferred attacker techniques for obtaining privileged user access include:
      • compromising an existing privileged account: mitigated through use of strong authentication (strong passwords and multifactor), and by admins using privileged accounts only for specific tasks
      • privilege escalation of a regular account or creation of a new account: these approaches can be mitigated by paying attention to the creation, modification, and use of user accounts
  • 6.3.2 Management review and approval
    • Account management reviews ensure that users only retain authorized permissions and that unauthorized modifications do not occur
    • Full review of accounts: reviewing every account is time-consuming, so full reviews are often done only for highly privileged accounts
    • Organizations that don’t have time to conduct a full review process may use sampling, but only if the sampling is truly random (see the sampling sketch at the end of this subsection)
    • Adding accounts: should be a well-defined process, and users should sign AUP
    • Adding, removing, and modifying accounts and permissions should be carefully controlled and documented
    • Accounts that are no longer needed should be suspended
    • ISO 9000 standards use a Plan-Do-Check-Act loop
      • plan: foundation of everything in the ISMS, determines goals and drives policies
      • do: security operations
      • check: security assessment and testing (this objective)
      • act: formally do the management review
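    • A minimal sketch of drawing a truly random sample of accounts for a management review using Python’s random module; the account list and sample size are placeholders:
```python
# Minimal sketch: draw a random sample of accounts for management review.
# Account names and sample size are placeholders.
import random

accounts = [f"user{i:03d}" for i in range(1, 501)]  # stand-in for the account list

def sample_for_review(population: list[str], k: int) -> list[str]:
    """Return k accounts chosen uniformly at random for review."""
    return random.sample(population, k)

if __name__ == "__main__":
    for account in sample_for_review(accounts, 25):
        print(account)
```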
  • 6.3.3 Key performance and risk indicators
    • Key Performance Indicators (KPIs): metrics that show the performance of an ISMS compared to stated goals
    • Choose the factors that can show the state of security
    • Define baselines for some (or better yet all) of the factors
    • Develop a plan for periodically capturing factor values (use automation!)
    • Analyze and interpret the data and report the results
    • Key metrics or KPIs that should be monitored by security managers may vary from org to org, but could include:
      • number of open vulns
      • time to resolve vulns
      • vulnerability/defect recurrence
      • number of compromised accounts
      • number of software flaws detected in pre-production scanning
      • repeat audit findings
      • user attempts to visit known malicious sites
    • Develop a dashboard of metrics and track them (see the KPI sketch below)
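    • A minimal sketch of computing two of the KPIs above (open vulnerability count and mean time to resolve) from simple tracking records; the data is illustrative:
```python
# Minimal KPI sketch: compute open-vulnerability count and mean days-to-resolve
# from simple tracking records. The data below is illustrative.
from datetime import date

vulns = [
    {"id": "VULN-101", "opened": date(2024, 1, 5), "resolved": date(2024, 1, 20)},
    {"id": "VULN-102", "opened": date(2024, 2, 1), "resolved": None},  # still open
    {"id": "VULN-103", "opened": date(2024, 2, 10), "resolved": date(2024, 2, 17)},
]

open_count = sum(1 for v in vulns if v["resolved"] is None)
resolve_days = [(v["resolved"] - v["opened"]).days for v in vulns if v["resolved"]]
mean_days_to_resolve = sum(resolve_days) / len(resolve_days) if resolve_days else 0

print(f"open vulnerabilities: {open_count}")
print(f"mean time to resolve: {mean_days_to_resolve:.1f} days")
```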
  • 6.3.4 Backup verification data
    • Managers should periodically inspect the results of backups to verify that the process functions effectively and meets the organization’s data protection needs
      • this might include reviewing logs, inspecting hash values, or requesting an actual restore of a system or file (see the hash-comparison sketch below)
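      • A minimal sketch of one verification technique mentioned above: comparing SHA-256 hashes of a source file and its backup copy; the file paths are placeholders:
```python
# Minimal backup-verification sketch: compare SHA-256 hashes of a source file
# and its backup copy. Paths are placeholders.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    source, backup = Path("data/customers.db"), Path("backups/customers.db.bak")
    match = sha256_of(source) == sha256_of(backup)
    print("backup verified" if match else "backup MISMATCH - investigate")
```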
  • 6.3.5 Training and awareness
    • Training and awareness programs play a crucial role in preparing an organization’s workforce to support information security programs
    • They educate employees about current threats and advise them on best practices for protecting information and systems under their care from attacks
    • Program should begin with initial training designed to provide foundation knowledge to employees who are joining the org or moving to a new role; the initial training should be tailored to an individual’s role
    • Training and awareness should continue to take place throughout the year, reminding employees of their responsibilities and updating them on changes to the organization’s operating environment and threat landscape
    • Use phishing simulations to evaluate the effectiveness of their security awareness programs
  • 6.3.6 Disaster Recovery (DR) and Business Continuity (BC)
    • Business Continuity (BC): the processes used by an organization to ensure, holistically, that its vital business processes remain unaffected or can be quickly restored following a serious incident
    • Disaster Recovery (DR): a subset of BC that focuses on restoring information systems after a disaster
    • These processes need to be periodically assessed, and regular testing of disaster recovery and business continuity controls provides organizations with assurance that they are effectively protected against disruptions to business ops
    • Protection of life is of the utmost importance and should be dealt with first before attempting to save material things

6.4 Analyze test output and generate report (OSG-9 Chpt 15)

  • Step 1: review and understand the data
    • The goal of the analysis process is to proceed logically from facts to actionable info
    • A list of vulns and policy exceptions is of little value to business leaders unless it’s put in business context; once all results have been analyzed, you’re ready to start writing the official report
  • Step 2: determine the business impact of those facts
    • Ask “so what?”
  • Step 3: determine what is actionable
    • The analysis process leads to valuable results only if they are actionable
  • 6.4.1 Remediation
    • Rather than software defects, most vulnerabilities in average orgs come from misconfigured systems, inadequate policies, unsound business processes, or unaware staff
    • Vuln remediation should include all stakeholders, not just IT
  • 6.4.2 Exception handling
    • Exception handling: the process of handling unexpected activity, since software should never depend on users behaving properly
      • “expect the unexpected”, gracefully handle invalid input and improperly sequenced activity etc
    • Sometimes vulns can’t be patched in a timely manner (e.g. medical devices needing re-accreditation) and the solution is to implement compensating controls, document the exception and decision, and revisit later
      • compensating controls: measures taken to address any weaknesses of existing controls or to compensate for the inability to meet specific security requirements due to various constraints
      • e.g. micro-segmentation of device, access restrictions, monitoring etc
    • Exception handling may also be required when patching causes a system crash (requiring rollback)
  • 6.4.3 Ethical disclosure
    • While conducting security testing, cybersecurity pros may discover previously unknown vulns that they are unable to correct themselves (perhaps implementing compensating controls in the meantime)
    • Ethical disclosure: the idea that security pros who detect a vuln have a responsibility to report it to the vendor, providing them with enough time to patch or remediate
      • the disclosure should be made privately to the vendor providing reasonable amount of time to correct
      • if the vuln is not corrected, then public disclosure of the vuln is warranted, such that other professionals can make informed decisions about future use of the product(s)

6.5 Conduct or facilitate security audits (OSG-9 Chpt 15)

  • 6.5.1 Internal
    • Having an internal team conduct security audits has several advantages:
      • understanding of the internal environment reduces time
      • an internal team can delve into all parts of systems, because they have insider knowledge
      • internal auditors can be more agile in adapting to changing needs, rescheduling failed assessment components quickly
    • Disadvantages of using an internal team to conduct security audits:
      • the team may have limited exposure to new/other methodologies (e.g. the team may have depth but not breadth of experience and knowledge)
      • potential conflicts of interest (e.g. reluctance to throw other teams under the bus and accurately report their findings)
      • audit team members may start with an agenda (say to secure funding) and overstate faults, or have interpersonal motives
  • 6.5.2 External
    • An external audit (sometimes called a second-party audit) is one conducted by (or on behalf of) a business partner
    • External audits are tied to contracts; by definition, an external audit should be scoped to include only the contractual obligations of an organization
  • 6.5.3 Third-party
    • Third-party audits are often needed to demonstrate compliance with some government regulation or industry standard
    • Advantages of having a third-party audit an organization:
      • they likely have breadth of experience auditing many types of systems, across many types of organizations
      • they are not affected by internal dynamics or org politics
    • Disadvantage of using a third-party auditor:
      • cost: third-party auditors are going to be much more costly than internal teams; this means that the organization is likely to conduct audits less frequently
      • internal resources are still required to assist or accompany auditors, to answer questions and guide