In a rapidly transforming threat landscape, cyber defense solutions must be both innovative and flexible to harden organizational security against ever-evolving adversarial attacks. While current signature-based detection techniques effectively combat known attack patterns, they are inherently reactive and require significant time to respond to sophisticated attacks.

CSET Releases Research Agenda for Cybersecurity and AI

The brief, A National Security Research Agenda for Cybersecurity and Artificial Intelligence, by Ben Buchanan, Director of the Cybersecurity and AI Project, walks policymakers through the “machine learning paradigm of artificial intelligence,” focusing on the unknowns of ML offense, defense, adversarial learning, and more. Buchanan recommends a thorough study of the ML “kill chain,” the steps that hackers take to achieve their goals, to help network defenders and software engineers find and remediate possible vulnerabilities. “One of the foremost national security questions at the intersection of cybersecurity and AI is the degree to which machine learning will reshape or supercharge this kill chain,” he wrote. “There are reasons for concern, but also reasons to think present-day automation—not using machine learning techniques—is already effective in human-machine teams.” He also asks policymakers to reflect on other offensive issues, such as how ML could be used to tailor and scale spear-phishing attempts and make cyber capabilities more powerful.

Buchanan goes on to address the defensive use of ML, specifically how it could be used to detect malicious code or to attribute cyberattacks more effectively. “If machine learning can improve detection, interdiction, and attribution, it can dramatically reduce the potential dangers of cyber operations,” the brief states, while clarifying that evaluation should be grounded in practical and measurable results.
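The contrast between reactive signature matching and the kind of behavioral detection ML enables can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the indicator hash, baseline request rates, and the 3-sigma threshold are all assumptions chosen for the example.

```python
import statistics

# Signature detection: exact match against known indicators (inherently reactive).
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # hypothetical indicator list

def signature_detect(file_hash: str) -> bool:
    """Flag only hashes already present in the indicator list."""
    return file_hash in KNOWN_BAD_HASHES

# Behavioral detection: flag activity that deviates far from a learned baseline,
# which can catch novel activity no signature describes.
def zscore_anomaly(baseline: list, observed: float, threshold: float = 3.0) -> bool:
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1e-9  # guard against zero variance
    return abs(observed - mean) / stdev > threshold

requests_per_minute = [52, 48, 50, 47, 53, 49, 51, 50]  # hypothetical baseline
print(signature_detect("deadbeef"))                   # unknown hash: missed by signatures
print(zscore_anomaly(requests_per_minute, 400))       # sudden burst: flagged by behavior
```

A real pipeline would replace the z-score with a trained model and richer features, but the division of labor is the same: signatures catch the known, statistical baselines surface the unknown.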
Buchanan also addresses the concern that, just like traditional computer systems, ML comes with its own weaknesses, such as software bugs and fundamental vulnerabilities, that give hackers new opportunities to exploit the system. He wrote that policymakers should be prepared to consider how ML systems can be secured against such attempts at deception, and how they might unintentionally reveal secrets if trained on classified data. While most of the brief focuses on the technical questions associated with applying ML, Buchanan concludes by asking policymakers to keep in mind other overarching questions about the relationship between AI and national security. “Moreover, at least in the near term, machine learning capabilities will add complexity to traditional attack vectors, raising the risks that cyber operators may adopt machine learning features without fully understanding their inner workings or potential effects,” he cautions.

Based on our experience as an industry provider of cybersecurity and AI solutions, we believe the following five steps will help organizations operationalize AI in their cybersecurity technology systems, business practices, and mission operations.

1. Consider Goals and Risks

For organizations ready to accept risks both known and unknown, AI offers powerful potential for predictive insight, precisely targeted resource allocation, and a proactive approach to strengthening your security posture. It can also help augment security teams, reduce analyst “alert fatigue,” and provide advanced detection capabilities. However, risks must be properly identified and viewed as opportunities rather than as barriers to success.

2. Establish the Foundation

AI offers powerful potential for augmenting existing cybersecurity tools beyond traditional signature-based approaches and provides a mechanism for the rapid validation and prioritization of threats.
However, understanding the basics of the network is essential for success, specifically in the areas of visibility, governance, storage, processing, and workflows.

3. Understand the Human Element

AI complements human effort by supporting analysts in reducing errors, accelerating analysis, and automating labor-intensive tasks. But the human element of AI also poses a variety of challenges organizations must grapple with before launching an AI cybersecurity initiative. Namely, will the organization be able to sufficiently staff the initiative to generate an ROI? An affirmative answer is not a given. For starters, working with AI requires an unusual blend of skills. Furthermore, a substantial gap exists between the demand for trained cybersecurity workers and the supply of trained applicants. According to Gartner, more than half of organizations have failed to begin or advance their AI implementation efforts due to a lack of sufficiently trained staff. Outsourcing is one solution for “minding the gap,” but it puts an organization at risk of losing intellectual property and institutional knowledge when the contract ends. In-house training can be expensive and requires competencies beyond the reach of many organizations.

4. Focus on Use Cases

One big question that emerges with any new technology initiative, in cybersecurity and beyond, is: what areas can organizations make more efficient, and what is the ROI? Another big question for those considering AI in cybersecurity: where do I start? To reduce risk and increase the likelihood of a successful AI implementation, we believe organizations should focus on implementing AI-based use cases as the first step in any broader AI adoption initiative. In cybersecurity, there is no shortage of good places to start. Pairing a compelling technical problem with an organization’s particular network features and strategic aims is, therefore, a fruitful paradigm for generating and evaluating potential use cases.

5. Automate and Orchestrate for Quick ROI

Once the first four steps have been taken and organizations better understand how AI can increase ROI across strategic goals, the next step is to automate processes and allow analysts to refocus their efforts. Nearly 75 percent of organizations understand the benefits of automation, but almost a third have failed to implement resource-saving automation initiatives. Areas such as threat intelligence collection, vulnerability scoring, phishing email header analysis, and traffic pattern analysis are prime places to begin both AI use cases and automation for a quick ROI.

Conclusion

In the current cybersecurity environment, adversaries are employing increasingly sophisticated algorithms and diversified methods that evade blacklists and rules- and behavior-based defenses. Traditional, reactive measures are no longer enough. Organizations need to quickly identify where intrusions occurred, determine the likely attack vectors moving forward, and remediate exploited vulnerabilities, all within a shrinking response window.

Reference: securitymagazine / meritalk