Veronica Martin caught up with Maher Yamout, Senior Security Researcher at Kaspersky, during GISEC 2023 to discuss the products and solutions the company is showcasing this year, its plans for the region and how ChatGPT is being used by cyber criminals.
What products/solutions are you showcasing at GISEC this year?
We are once again promoting and reemphasizing cyber immunity, because cyber threats are evolving and growing exponentially, and they exploit the vulnerabilities we have inherited in existing systems. With a cyber-immune approach, you don't need to apply security controls or antivirus in the first place. We didn't rely on any current operating system; we created our own, based on our long experience in cybersecurity.
Why is GISEC important for your company?
GISEC is an event that has been around for a long time, and all the security vendors and major tech companies come here to showcase their best solutions and what they have to offer in terms of capabilities, products and services.
What are your plans for the region in 2023?
Our plans are to keep up with cyber threats and to protect government institutions and corporate environments, as well as end users. We also want to further promote cyber immunity and the products we sell that adopt that approach.
What are some of the leading security trends companies should watch out for this year?
I think the supply chain is a critical issue, because infrastructure is getting more complicated. You will see a lot of companies relying on other companies, and those companies relying on others in turn. This makes cyber threats harder to detect, because attackers can exploit the supply chain and reach their target through multiple dependencies.
This is a tricky area that everyone should be aware of. We are also seeing a lot of new events and technologies, such as AI, that can potentially be used by threat actors to a certain extent. We usually see spikes in cyber threats whenever a major new event or technology appears.
How is ChatGPT being used by cyber criminals?
Cyber criminals can use ChatGPT in multiple ways, but we need to understand that AI language models have certain limitations. It's a query-based system: you ask a question and it comes back with an answer, but that answer is not necessarily a bulletproof one. It has a lot of issues and is incomplete in many cases, so you need a human to check it.
That also means it's not the case that any newcomer to the field can create a sophisticated virus; the tool doesn't help that much on its own. You need to be a professional to understand these things and to know what to expect from the answer. So a threat actor might ask ChatGPT to create certain functions, but a human still has to validate each function and connect it with the next one. Chat or language-based models cannot do everything on their own.