List of questions
How well does machine learning anomaly detection work in practice in cyber security?
Whilst a lot of academic literature has been produced on detecting anomalies in the cyber security domain (e.g. in network traffic), how well do these methods or systems work in the 'real world'? This is an important topic, since anomaly detection in cyber security poses several challenges that are not all found in other applications of machine learning: the threat of adversaries, large numbers of false positives or false negatives, a lack of training data, and a lack of evaluation methods.
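To make the question concrete, here is a minimal sketch of the kind of statistical anomaly detection often applied to network data, using a robust z-score based on the median absolute deviation (MAD). Real systems use many features and far richer models; the feature (bytes per flow), threshold and numbers here are illustrative assumptions only.

```python
# A minimal, hypothetical sketch: flag network flows whose size deviates
# strongly from the typical flow, using a MAD-based robust z-score.
import statistics

def mad_anomalies(values, threshold=3.5):
    """Return values whose robust z-score exceeds the threshold."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []
    # 0.6745 scales the MAD to be comparable with a standard deviation.
    return [v for v in values if abs(0.6745 * (v - med) / mad) > threshold]

# Mostly "normal" flow sizes plus one large transfer (possible exfiltration).
flows = [480, 510, 495, 530, 470, 505, 490, 50000]
print(mad_anomalies(flows))  # → [50000]
```

Even this toy example hints at the 'real world' gap the question raises: a fixed threshold tuned on one network produces floods of false positives on another, and an adversary who knows the statistic can keep transfers just under it.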
How can we ensure that systems utilising AI for novel threat (anomaly) detection do not produce high numbers of false positives requiring human analysis, and how can the sensemaking and triage process of such alerts be improved?
The last thing security operations want is another tool alerting them to things and taking more of their time to understand. Unsupervised anomaly detection is likely to be rife with false positives (suspicious but not malicious), but without human input it is unlikely to improve.
How can we ensure that systems utilising AI for novel threat (anomaly) detection are not themselves prone to attack?
This could be a broader question of how to engineer AI that is secure, but for cyber defences using AI we certainly want a level of confidence that the detection techniques aren't able to be easily subverted. AI also creates a ripe extension to the attack surface that is likely to be exploited by cyber criminals.
To what extent is it feasible to develop automated responses to detected (potential) cyber threats?
Particularly when thinking about AI-driven attacks, rapid real-time responses are likely to be required. This question is about going beyond simple systems of response automation (e.g. rule-based) and could include optimising novel defensive strategies, acting under uncertainty, continuous adaptation and learning, etc.
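For contrast, here is a sketch of the kind of simple rule-based response automation the question asks us to go beyond: a static mapping from detected threat type to canned action. The threat names and actions are hypothetical, not drawn from any real product.

```python
# A hypothetical, static playbook: no optimisation, no uncertainty handling,
# no adaptation — exactly the baseline the question wants to move past.
RESPONSE_PLAYBOOK = {
    "brute_force_login": "lock_account",
    "malware_beacon": "isolate_host",
    "data_exfiltration": "block_egress",
}

def respond(threat_type):
    """Return the canned action for a known threat, else fall back to a human."""
    return RESPONSE_PLAYBOOK.get(threat_type, "escalate_to_analyst")

print(respond("malware_beacon"))   # → isolate_host
print(respond("novel_ai_attack"))  # → escalate_to_analyst
```

The limitation is visible in the last line: anything the playbook has not anticipated, including a novel AI-driven attack, falls straight back to a human, which is precisely where real-time response breaks down.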
What is the right balance between IT spend on external risk and internal risk?
IT budgets are often spent on firewalls, endpoint protection, mobility and cloud, across various types of use case. Yet a developer or admin, maliciously or in error, can cause havoc through over-privileged activity, and little or no budget is assigned to the internal threat beyond user identity or two-factor authentication, which does not resolve this. Where does it lie on the radar of a CISO or architect following the Capital One breach's demonstration of the risk of over-privilege?
How can security organisations, universities and security vendors share and collaborate on threat intelligence in a more efficient manner?
Organisations are continually on the back foot when it comes to threat intelligence, and collaboration between organisations is limited. We would be keen to discuss how this could be improved, allowing organisations to start getting a foothold in protecting themselves against known and unknown threats.
How can security vendors assist with improving security awareness in organisations and among students?
Training is a critical problem across all sectors; employee and student training is a critical first line of defence against phishing attacks, clickbait etc. We would be keen to discuss how we, as a group, could better educate all parts of society.
How do we reduce alert fatigue/burn-out in a managed security services provider?
We have a security operations centre, where companies outsource their network security to us. Because we monitor many customers, rules are very generalised, which can create a lot of "noise": analysts end up with many alerts that can't be actioned due to service restrictions. This may lead to burn-out, where analysts become more likely to class a ticket as "noise", and some potentially legitimate alerts may be missed. We would like to address this on a technical level and find a balance between convenience and confidentiality, understanding that tailoring alerts per customer would address this; however, this is not always possible.
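One technical approach to the noise problem is to feed analyst verdicts back into the alert pipeline, suppressing alerts whose "fingerprint" (rule plus source) has repeatedly been triaged as noise. This is a hedged sketch only; the class, field names and threshold are assumptions, not a real SOC API.

```python
# Hypothetical sketch: suppress alerts that analysts have repeatedly
# classed as noise, so that only fresh or contested fingerprints escalate.
from collections import Counter

class AlertFilter:
    def __init__(self, noise_threshold=3):
        self.noise_threshold = noise_threshold
        self.noise_counts = Counter()

    def fingerprint(self, alert):
        # Assumed alert shape: a dict with "rule" and "source" keys.
        return (alert["rule"], alert["source"])

    def mark_noise(self, alert):
        """Record an analyst verdict that this alert was noise."""
        self.noise_counts[self.fingerprint(alert)] += 1

    def should_escalate(self, alert):
        """Escalate unless this fingerprint was repeatedly marked as noise."""
        return self.noise_counts[self.fingerprint(alert)] < self.noise_threshold

f = AlertFilter()
noisy = {"rule": "port-scan", "source": "10.0.0.5"}
for _ in range(3):
    f.mark_noise(noisy)
print(f.should_escalate(noisy))  # → False: suppressed after repeated verdicts
print(f.should_escalate({"rule": "port-scan", "source": "10.0.0.9"}))  # → True
```

The design choice worth debating is the one the question itself raises: a per-customer feedback store like this trades confidentiality and operational overhead for convenience, and the suppression threshold decides how many legitimate alerts are lost along with the noise.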
What is the best way to visualise cybersecurity for a layperson?
Autonomous vehicles are going to need to interact with the general public. The security of these vehicles is paramount; however, the public needs to know how and why (from a security perspective) the machine is acting in some way, and be able to instruct intervention if necessary. What is the best visual way to get this across?
How can we develop, assign, manage and certify trust as a key business attribute, commodity and enabler?
As part of developing a highly adaptive, ad hoc supply chain, trust becomes a major attribute to facilitate rapid interaction between commercial organisations. If it is to become such a central element, how can we assign it, manage it and govern it?
In this age of increasing interconnectivity, AI, robotics and autonomous vehicles, many communities feel vulnerable in the virtual space. How can multi-sectoral organisations and agencies improve this position?
To what extent will AI bring regulation to the digital security industry?
How will AI "deep fakes" cause problems in authentication methods that rely on biometrics like voice or face recognition, and how can it be prevented?
Services like telephone banking and insurance claims are already using some of the latest advancements in voice technology to identify customers; meanwhile voice activated assistants are building up large libraries of recordings. An engineering company was recently scammed with a deep fake of the CEO's voice. What is the likely impact of AI used like this for social engineering and fraud, and what mitigations are possible?
How is data privacy regulation impacting the activity scope of the CISO office, particularly after the massive breaches at BA and Marriott that resulted in 9-figure fines?
How can CISOs effectively collaborate with IT and legal to understand what data they have and how that data is managed, and, if necessary, put additional privacy-specific controls in place or carry out data minimisation projects?
How do we encourage adoption of good cyber security practices in the SMB market where there might not be the budget/ability to derive the fullest value?
We feel the challenge of cyber security in the UK is to raise the strength of the herd against commodity cyber attacks, but most products seem to be aimed at enterprise clients with big budgets and technical know-how.