"Humanity can enjoy a flourishing future with AI … Let's enjoy a long AI summer, not rush unprepared into a fall."
Those were the cautionary closing lines of a recent open letter from the Future of Life Institute. The letter called for a pause in the training of AI systems more powerful than GPT-4 (the AI system that powers ChatGPT). Google CEO Sundar Pichai has also warned about AI, stating that "It can be very harmful if deployed wrongly … and the technology is moving fast", while Elon Musk has cautioned that AI could lead to "civilization destruction". Of course, the same tech giants have contributed to the AI arms race by shifting corporate strategies and investing billions of dollars in a very short period of time, leading to an unprecedented proliferation of AI technologies.
Although a world-ending AI event like the one envisaged in 'The Terminator' may be far off, the abundance of AI tools carries serious commercial consequences that businesses should be aware of. One important issue is AI's ability to enhance online IP crime.
As was highlighted at this year's Regional IP Crime Conference in Dubai (organised in cooperation with Interpol), the proliferation of AI tools makes it more challenging to tackle IP crime. Indeed, the European Union Intellectual Property Office (EUIPO) recently outlined how AI can be used to boost IP crime and circumvent certain safeguards. However, whilst AI is being used to enhance criminal activities, the same technology can also be used to stop them. The following areas stand out as particularly concerning for businesses.
Marketing and distribution of counterfeit goods
Counterfeiters can use computer vision (enabling computers to identify and understand objects in images and videos) and natural language processing (enabling computers to understand text and spoken words) to scan online listings, identify a rightsholder's most popular goods and sell counterfeit versions of those goods online. They can also create fake online personas using virtual/augmented reality to promote the counterfeit goods. In addition, AI can help criminals import counterfeit goods by making it easier to create shell corporations, identify ports where customs seizures are least likely to occur, optimise the concealment of goods in shipping containers and find the quickest trade routes to ports of entry.
Live streaming of copyright protected digital content
Machine learning is being used by those involved in piracy to scan websites and identify links to infringing streams of live events that people can access for free. These links can then be posted automatically on aggregator websites, which the operators monetise through advertising. Whilst not a new issue, AI-enabled IPTV apps which circumvent technological protection measures can also enable users to avoid paying television subscription fees and to watch television programming for free.
Distribution of copyright protected digital content
Decentralised file sharing networks connected to a cryptocurrency blockchain are allowing distribution of unauthorised copies of copyright-protected films and TV shows. Criminals can sell premium accounts on the network where users pay to store files. Whilst this is an existing problem, AI is being used to remove the technological protection measures (e.g. digital dots and watermarks) that are added to films to trace unauthorised copying and distribution.
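To illustrate the tracing principle behind such watermarks, the sketch below is a deliberately simplified toy (not any studio's actual scheme): a recipient-specific ID is written into the least-significant bits of a frame's pixel values, so a leaked copy can be traced back to its source. Production forensic watermarks use robust, imperceptible transforms designed to survive re-encoding; the pixel values and ID here are invented for illustration.

```python
# Toy forensic watermark: hide a per-recipient ID in the least-significant
# bits (LSBs) of the first few pixel values of a frame. Illustrative only:
# real watermarking uses robust transforms that survive compression.

def embed_id(pixels: list[int], recipient_id: int, bits: int = 16) -> list[int]:
    """Return a copy of `pixels` with `recipient_id` written into the LSBs."""
    marked = list(pixels)
    for i in range(bits):
        bit = (recipient_id >> i) & 1
        marked[i] = (marked[i] & ~1) | bit  # overwrite pixel i's LSB
    return marked

def extract_id(pixels: list[int], bits: int = 16) -> int:
    """Read the embedded ID back out of the LSBs."""
    return sum((pixels[i] & 1) << i for i in range(bits))

frame = [37, 214, 90, 160, 12, 255, 81, 3] * 4  # 32 fake 8-bit pixel values
marked = embed_id(frame, recipient_id=4821)
print(extract_id(marked))  # recovers 4821
```

Because the mark changes each pixel by at most one intensity level, it is invisible to viewers but readable by the distributor, which is why criminals now use AI to locate and strip such marks.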
Theft of a company's trade secrets
AI can be used to identify and target a company's most susceptible employee in order to elicit trade secrets and other confidential information. Using AI reconnaissance tools (which learn an individual's communication style from their social media activity), criminals can create an alias of a trusted employee of the target company. Natural language processing can be used to write convincing phishing emails, and deepfakes of an employee's voice can be used to make telephone calls to colleagues.
IP rights registration and services fraud
Fraudsters can use generative adversarial networks (which create new data resembling existing data) and computer vision tools to produce a fake application replicating genuine applications filed by the creator of a mark. This enables criminals to fraudulently register a trade mark as their own when in fact the mark was created by someone else. Further, machine learning with pattern recognition and computer vision tools can be used to produce fake re-registration or renewal invoices to trick rightsholders into paying fraudulent fees.
Cybersquatting and typosquatting
Machine learning can be used to quickly identify popular brand names that companies have not yet registered as domain names. Fraudsters then register, or 'squat' on, these domain names and exploit them in a number of ways; for example, by selling counterfeit goods via those domains.
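The same enumeration idea can be turned to defence: a rightsholder can generate likely typo variants of its own domain label and monitor or pre-register them. The sketch below uses a hypothetical brand ("acme") and only three simple edit operations; commercial monitoring services cover far more permutations, such as homoglyph substitutions, keyboard-adjacency errors and alternative top-level domains.

```python
# Illustrative typo-variant generator for a brand's domain label.
# Brand "acme" is a hypothetical example; real typosquat monitoring
# also covers homoglyphs, added hyphens, alternative TLDs, etc.

def typo_variants(name: str) -> set[str]:
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])                # omission:  "acme" -> "ace"
        variants.add(name[:i + 1] + name[i] + name[i + 1:])  # doubling:  "acme" -> "aacme"
    for i in range(len(name) - 1):
        # adjacent transposition: "acme" -> "amce"
        variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    variants.discard(name)  # the genuine label is not a typo
    return variants

print(sorted(typo_variants("acme")))
```

Each candidate can then be checked against WHOIS or DNS records to see whether a squatter has already registered it.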
What can be done to counter these threats?
As criminals have accumulated an AI arsenal, so too have law enforcement agencies, which are locked in a constant cat-and-mouse game to prevent and enforce against online IP crime. Interestingly, many of the same AI tools can be used on both sides.
For example, law enforcement can use computer vision for recognising infringement patterns, predicting future infringements, detecting the marketing of infringing goods, and detecting and analysing fraudulent logos. Authorities can also use natural language processing to identify and block phishing attacks, analyse fraudulent behaviour, and quickly recognise infringements. Machine learning can be used to detect fake online content, improve content recognition tools, and identify infringement patterns.
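As a minimal sketch of the pattern-matching idea, a brand owner might flag newly registered domains whose labels are suspiciously close to a protected mark. The example below uses Python's standard-library difflib as a stand-in for the machine-learning classifiers such tools would actually use; the brand "acme", the sample domains and the 0.8 threshold are illustrative assumptions.

```python
# Minimal brand-similarity check for spotting lookalike domains.
# difflib is a simple stand-in for real ML-based detection; the brand,
# domains and threshold below are invented for illustration.

from difflib import SequenceMatcher

def looks_suspicious(domain: str, brand: str, threshold: float = 0.8) -> bool:
    label = domain.split(".")[0].lower()
    if brand.lower() in label:  # brand embedded outright, e.g. "acme-outlet.com"
        return True
    # otherwise score overall similarity, e.g. "acm.com" vs "acme"
    return SequenceMatcher(None, label, brand.lower()).ratio() >= threshold

for d in ["acme-outlet.com", "acm.com", "wholesalefoods.net"]:
    print(d, looks_suspicious(d, "acme"))
```

Running such a check against daily feeds of new domain registrations is one simple way the "infringement pattern" detection described above can be automated.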
In addition, expert systems, which solve complex issues and imitate human decision-making, can be used by authorities to identify the best strategy for protecting a system from specific vulnerabilities.
The majority of these crimes are not new. However, whereas criminals of the past were forced to use basic software or even manual techniques to carry out their activities, modern offenders benefit from the anonymity, scalability, speed and user-friendliness of AI to enhance the frequency and effectiveness of their crimes.
Whilst international conferences facilitate cross-border cooperation, more knowledge-sharing among law enforcement agencies is required to seriously tackle these offences globally. Authorities, as well as businesses, must also upskill their workforce, provide training and familiarise themselves with various AI technologies to be able to effectively fight online criminals in the field of IP.
On a national level, the UK Government published its AI white paper last month, which sets out its proposed framework for regulating AI while encouraging innovation and unleashing the benefits of AI. As we discussed in our review of the white paper, it proposes high-level overarching principles for AI regulation, but no new legislation and no new regulator (though it may in the future introduce a statutory duty for existing regulators to have due regard to these principles). It also proposes that regulators work together to produce joint guidance for businesses, to encourage clarity and make it easier for businesses to comply whilst still developing innovative products. As noted in our report, the white paper's approach is flexible and pragmatic but lacks certainty. Furthermore, commentators have remarked that the white paper does not deal with important issues relating to the allocation of liability for AI, the risk of overlapping regulatory jurisdictions and the uneven enforcement powers across the different regulators.
It is encouraging to see that experts, authorities and governments are considering how best to manage the growth of AI. At the same time, businesses must be proactive and take appropriate steps to prevent, or at least mitigate, the risk of increasingly sophisticated cyber-attacks.
Click here for more information on our cyber security and investigations practice, MDR Cyber.