Executive Viewpoint Archives - TechInformed
https://techinformed.com/category/opinion/executive-viewpoint/

The double-edged sword of Generative AI in creative workflows
https://techinformed.com/gen-ai-double-edged-sword-of-generative-ai-in-creative-workflows/ | Thu, 12 Sep 2024

Generative AI has emerged as a transformative force across various industries. It can produce content from written articles to digital art, heralding a new era of creative workflows. For sectors such as gaming, generative AI offers unprecedented opportunities to enhance productivity and fuel innovation. But as we embrace these advancements, it’s crucial to address the inherent risks, too—particularly the potential displacement of human artists and the ensuing impact on job opportunities.

 

The growing potential of generative AI

 

Generative AI tools such as ChatGPT, Claude, and GitHub Copilot have demonstrated remarkable capabilities in creating human-like text, stunning visuals, and even music compositions. This means that anyone, anywhere, can generate artwork with tools like Midjourney, or even have their favourite singers "cover" songs outside their usual repertoire.

In addition to satisfying our curiosities, these tools also help reduce the time and effort required to produce high-quality content at work. For example, in the gaming industry, AI can assist in generating detailed environments, character designs, and dialogue, allowing game developers to focus more on storytelling and gameplay mechanics.

The use of generative AI can also help to democratise creativity. Independent creators and smaller studios often have far fewer resources than their larger counterparts, yet with generative AI they can still produce professional-grade content. This can level the playing field, fostering a more diverse and vibrant ecosystem. But there are risks to consider, too.

 

Putting the risks of automation under the microscope

 

The same attributes that make generative AI appealing also present risks. One of the most significant concerns is that human workers will no longer be needed, substantially reducing the workforce. As AI becomes more proficient, companies may streamline their creative departments, relying more on algorithms and less on human talent. This could lead to job displacement for both entry-level roles and seasoned professionals who find their skills no longer in demand or unable to match a machine's output.

The impact on job opportunities extends beyond direct employment. The gaming industry, for example, has long been a vibrant community where artists, designers, and developers collaborate to create captivating experiences for players. Reducing the number of these professionals could stifle that collaborative spirit and stall creativity as we know it. In short, the diversity of human experience, talent, and emotion that is vital to creating resonant and impactful art might be lost if AI-generated content becomes the norm.

 


 

Proactive measures for a balanced future

 

To harness AI's benefits while mitigating its risks, a multifaceted approach is needed. Firstly, there must be a commitment to ethical AI development. This includes ensuring transparency in how AI tools are used and ensuring that human oversight remains integral to the process.

In addition to this, education and continuous learning are essential. As AI reshapes creative workflows, the skills required in the industry will evolve. Investing in training programmes that help game designers and artists adapt to new tools and techniques can keep the workforce relevant and competitive, making it harder for employers to push them out of the door and replace them with machines. This could involve integrating AI literacy into art and design curriculums, fostering a new generation of creators who are as comfortable with generative AI as with traditional tools.

Lastly, businesses should adopt a balanced approach to workforce management. Rather than looking at AI as a means to cut costs, companies can view it as an opportunity to enhance existing human capabilities. By maintaining robust teams that combine human creativity with AI efficiency, businesses can drive innovation while preserving the jobs and livelihoods of their employees. This approach will lead to more dynamic outputs, blending human ingenuity with machine precision.

 

Looking at the future of creativity

 

There's no denying that generative AI holds immense potential to revolutionise creative workflows. However, without careful consideration and proactive measures to safeguard existing talent, it poses significant risks to job opportunities and to the essence of human creativity.

By embracing ethical AI development, investing in education, and adopting balanced workforce strategies, we can navigate this landscape without making mistakes that will endanger us all. AI isn’t expected to go anywhere and will only continue to advance its capabilities. So, being mindful of the opportunities and risks it carries with it means we can ensure that the future of creativity remains vibrant, inclusive, and profoundly human.


Who needs interns, when you have AI?
https://techinformed.com/who-needs-interns-when-you-have-ai/ | Mon, 09 Sep 2024

GenAI tools like ChatGPT and Gemini have created endless excitement in the tech world as their potential to transform working lives continues to be explored. However, there have been equal concerns over what happens when these models malfunction. Examples of AI getting things wrong are already making the headlines, including incidents at Air Canada and Google.

Quirks like these demonstrate how important it is for organisations to weigh up the strengths and weaknesses of AI before applying it to any aspect of business. Without forethought, companies risk embarrassing or even disastrous consequences.

At the same time, it should be appreciated that GenAI is still in its infancy and will not reach its full potential for several years. Even at this early stage of development, it has many strengths, provided implementations are thought through carefully.

AI strengths

 

The tax industry is a prime example of what’s already feasible with the current iterations.

Looking on the positive side, AI works day in, day out, never getting tired or stressed, carrying out tasks at a vast scale. It is extremely efficient at the mundane, repetitive jobs that people generally dislike and that are highly time-consuming.

Take a task like analysing ledger data for VAT purposes; in some instances this can be millions of rows. There aren’t enough interns you could throw at the task of reviewing every row, yet AI can analyse this kind of dataset in seconds – making it the ‘infinite intern’.

Similarly, the fast automation of routine tasks, such as data entry, number-crunching and anomaly detection, is a piece of cake for AI. Well-suited to these types of activities, AI churns through data processing quickly, constantly, and reliably.
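To make the "infinite intern" idea concrete, the sketch below shows how a few lines of Python could flag unusual ledger rows for human review. It is purely illustrative: the file and column names are hypothetical, and a real VAT pipeline would be built around the firm's own data model and review controls.

```python
# Minimal sketch: flagging unusual ledger rows for VAT review.
# File and column names ("net_amount", "vat_amount") are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

ledger = pd.read_csv("ledger.csv")  # could be millions of rows

# Derive the effective VAT rate so odd ratios stand out.
ledger["effective_rate"] = ledger["vat_amount"] / ledger["net_amount"]
features = ledger[["net_amount", "vat_amount", "effective_rate"]].fillna(0)

# Isolation Forest scores each row; -1 marks the most anomalous entries.
model = IsolationForest(contamination=0.01, random_state=0)
ledger["flagged"] = model.fit_predict(features) == -1

# Only the flagged rows go to a human reviewer.
print(ledger[ledger["flagged"]].head())
```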

Its consistency is a key strength. Unlike employees, who may not always be objective, AI algorithms stick to the rules, applying them in the same way on each occasion, whereas evidence shows that an individual's decision-making ability and performance can vary significantly owing to factors such as hunger, fatigue, workload, and stress.

Even the time of day can make a difference to someone’s reasoning powers, as highlighted by a recent study. It found workers are less active and more prone to making mistakes on afternoons and Fridays, with Friday afternoon representing the lowest productivity point.

AI can also extract valuable insights from huge volumes of disparate tax and financial data that would take a person days or weeks to compile and interpret. Predictive analytics powered by AI can forecast trends, model different outcomes based on complex tax scenarios, and uncover potential compliance issues.

On this basis, AI sounds like a compelling choice for routine work and mass data crunching, at the very least. However, it’s not all plain sailing as AI tools are only as good as the training they receive. It’s a case of garbage in, garbage out.

AI weaknesses

 

If the data used to inform an algorithm is inaccurate, this will detrimentally affect the results it provides. This is how errors occur and biases creep in, where outcomes are at best misleading or, at worst, completely wrong.

AI tools can also lack the capability to interpret important context and miss subtleties that humans easily take into account. The end result can be spurious responses and hallucinations, where AI misinterprets data and fabricates answers. Fortunately, these issues can be rectified, as AI does respond well to constant training, but this can take time.

It boils down to having the right monitoring, evaluation, and re-training in operation. AI tools shouldn’t be left to act on their own without proper oversight, and outputs should be sanity checked by humans.

The future for interns

 

So, what does the rise of AI mean for interns?

For tech-savvy generations, like Gen Z, the future for interns in the tax industry looks bright. Having grown up using technology throughout their lives, they expect to find technical innovation in the workplace. Indeed, many consider it a must-have when choosing a career path. The finance and tax industry has a massive opportunity to tap into this mindset to encourage new talent into the industry.

Harnessed to do the mundane work as the ultimate "infinite intern", AI can support new graduates and tax assistants rather than replacing them. Instead of spending most of the early part of their careers on traditionally laborious work, human interns will check and review information already processed and analysed by AI.

This frees up time to hone their accountancy skills more quickly, and to use the insights that AI uncovers for more satisfying work that is usually only possible much later in their tenure, such as strategic planning, problem-solving and value-added decision-making for the business or its clients.

GenAI will continue to improve. But, for the time being, assuming it can mimic the expertise of a senior level decision maker is asking for trouble. However, if deployed with due diligence, AI can bring much needed efficiency and valuable insights to financial data processing. It will help to attract a forward-thinking generation of tax professionals looking for careers that champion technical innovation and new ways of working.

Why the AI productivity revolution should enhance, not replace the workforce
https://techinformed.com/why-the-ai-productivity-revolution-should-enhance-not-replace-the-workforce/ | Mon, 02 Sep 2024

It’s been well documented that, since the 2008 financial crisis, productivity in the UK has stagnated, failing to regain the upward momentum that once fuelled economic prosperity.

Despite advances in technology, the anticipated growth in workplace efficiency has not materialised. However, the tide may be turning with the emergence of artificial intelligence. According to a widely published report by Workday, AI has the potential to unlock an astounding £119 billion in annual productivity across UK enterprises.

But as promising as AI is, it's crucial to recognise that it is not a silver bullet. AI can significantly enhance productivity, but business leaders must approach its adoption with a comprehensive, responsible strategy.

Empowerment, not replacement

 

There is a common misconception that AI will lead to widespread job losses, replacing human workers with machines. In reality, AI should be viewed as a productivity enhancer rather than a job eliminator.

In the same way Microsoft tools have become indispensable in modern workplaces, AI can take over mundane, repetitive tasks, freeing up employees to focus on more meaningful, impactful work. This shift allows workers to engage in activities that require creativity, problem-solving, and human interaction — areas where AI cannot compete.

The UK’s productivity gap — 24% lower than it would have been if pre-2008 trends had continued — highlights the need for innovative solutions. AI presents a unique opportunity to close this gap by automating routine processes, reducing errors, and enabling faster decision-making. However, to realise this potential, AI must be integrated thoughtfully into the workplace, with an emphasis on enhancing human capabilities rather than replacing them.

While the potential of AI is clear, its adoption has been slow, primarily due to concerns over safety, privacy, and bias. These fears are not unfounded, as the deployment of AI in business processes comes with risks that need to be carefully managed.

Trust in AI is critical for its successful implementation. Employees and business leaders alike need to be confident that AI systems are reliable, transparent, and aligned with business goals.

To build this trust, businesses must prioritise responsible AI strategies. This involves more than just implementing the latest technologies; it requires a commitment to transparency, explainability, and continuous education. Employees should be well-informed about how AI systems work, what data they use, and how decisions are made. This transparency is key to dispelling fears and ensuring that AI is seen as a supportive tool rather than a threat.

Leadership-driven AI

 

AI alone is not enough to drive the productivity gains the UK needs. Business leaders must take a proactive role in guiding their organisations through the AI revolution. This starts with a clear analysis of the specific efficiencies AI can deliver and the development of a transparent strategy for its adoption. Leaders must also address the cultural barriers to AI integration, such as resistance to change and lack of trust.


Moreover, employee motivation and engagement are critical to unlocking the full potential of AI. Unengaged employees are the biggest barrier to productivity. By leveraging AI to handle routine tasks, employees can focus on work that is more fulfilling and aligned with their skills, leading to higher engagement and, ultimately, greater productivity.

The UK stands on the brink of a significant productivity shift, with AI poised to play a central role. However, AI should not be viewed as a panacea. It is a powerful tool that can enhance productivity, but it must be implemented alongside thoughtful leadership, clear communication, and a commitment to building trust. By approaching AI adoption responsibly, businesses can not only improve productivity but also create a more motivated and engaged workforce. This balanced approach will be key to navigating the future of work and ensuring that AI serves as an enhancer, not a replacement, of human potential.

10 steps to protect your business from cyber-attacks
https://techinformed.com/top-10-steps-to-protect-your-business-from-cyber-attacks/ | Tue, 27 Aug 2024

In today’s digital age, cyber-attacks pose a significant threat to businesses worldwide, with three in four companies at risk. As cyber threats evolve, safeguarding your enterprise from potential breaches is more critical than ever. To help protect your organisation, Dr Phil Legg, a cybersecurity expert at Independent advisor Best VPN, has compiled the top 10 proven steps to secure your business from cyber-attacks.

1. Mobile Device Management (MDM)

 

Microsoft Intune and Apple's built-in device management tools provide MDM capabilities for devices used within an enterprise environment. These capabilities allow IT administrators to manage devices in the unfortunate case of theft or loss. MDM also enables teams to ensure that devices are used for their intended business purposes and helps keep security patches up to date for individual employees.

2. Two-factor Authentication (2FA)

 

Online enterprise platforms such as Microsoft 365 and Google Workspace both support 2FA, meaning that users not only require their password to log in but also need to authenticate their login activity using a second factor, such as a mobile phone authenticator app or a physical security device. If a password is compromised, 2FA provides additional account security to protect your logins from intruders.
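For illustration, the snippet below sketches the time-based one-time password (TOTP) mechanism that authenticator apps rely on. It uses the open-source pyotp library as an assumption and is not how Microsoft 365 or Google Workspace implement 2FA internally; it simply shows why a stolen password alone is not enough.

```python
# Minimal sketch of the TOTP mechanism behind most authenticator apps,
# using the pyotp library (an assumption for illustration).
import pyotp

# The shared secret is provisioned once, usually via a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Current 6-digit code:", totp.now())

# At login, the server checks the submitted code in addition to the password.
submitted_code = input("Enter the code from your authenticator app: ")
if totp.verify(submitted_code):
    print("Second factor accepted")
else:
    print("Second factor rejected")
```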

3. Password Management

 

Where users are required to maintain accounts for multiple online services, a password manager can help curate and store unique passwords for each service. With unique passwords for different services (websites), even if one is compromised and learnt by an attacker, other accounts are more likely to remain secure.
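As a rough sketch of the principle, the snippet below generates a distinct random password per service, which is essentially what a password manager automates (alongside encrypted storage, which is omitted here). The service names are placeholders.

```python
# Minimal sketch: one unique, random password per service, the core idea
# a password manager automates. Service names are placeholders.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One distinct credential per service, so one breach does not expose the rest.
vault = {service: generate_password() for service in ["email", "crm", "payroll"]}
for service, password in vault.items():
    print(service, password)
```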

4. Virtual Private Network (VPN)

 

Last year alone, more than 400,000 cases of fraud and computer misuse were recorded, with 46% of UK businesses experiencing a cyber attack. Providing a secure VPN is essential for maintaining online privacy and security and for protecting your business from cyber-attacks. At its core, a VPN establishes an encrypted connection between your device and a remote server, keeping your internet activities private and safer from unwanted tracking.

5. Physical security

 

Ensure that employees have clear guidance on maintaining the physical security of their work assets, including laptops and other devices with sensitive information or access.


6. Shoulder surfing

 

Just as physical security is critical, ensure staff are aware of the threat of shoulder surfing – where a stranger can gather your private information by secretly watching your screen. This is especially likely when working in public spaces like cafes and trains. Never reveal sensitive data, like a password or credit card information, on a laptop screen in a public space.

7. Business Continuity Planning (BCP)

 

If a widespread incident were to occur across your IT estate, would you have a plan B? How would the organisation operate without email or access to specific systems? Ensure that a BCP is in place that is both realistic and actionable, with clear guidance on how this would be implemented if necessary.

Understand the operational cost to the business if such an event should occur and assess the expected likelihood of such an event occurring. This should factor into your risk management strategy.

8. Backup & Cloud Storage

 

Understand and classify the importance of your data assets, and ensure that off-site backups are maintained regularly — especially for any data that is crucial for your business to function.

In the case of natural phenomena (e.g., earthquakes, flooding, hurricanes, etc.), consider using cloud storage to provide offsite backup. Microsoft, Google, Apple, and other third parties all offer options for this, alleviating the risk of storing data on a specific physical device.

However, before you create a backup, you should also consider the classification of data and whether the data is appropriate to be stored within a cloud environment managed by a third party.
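As one hypothetical example of automating off-site backups, the sketch below uploads a date-stamped database dump to cloud object storage. It assumes AWS S3 and the boto3 library with credentials already configured; the bucket and file names are placeholders, and any equivalent provider could be used instead.

```python
# Minimal sketch of a scheduled off-site backup to cloud object storage.
# Assumes AWS S3 via boto3 with credentials already configured; the bucket
# name and file paths are hypothetical.
import datetime
import boto3

s3 = boto3.client("s3")
bucket = "example-company-offsite-backups"

# Date-stamped key so previous backups are retained.
today = datetime.date.today().isoformat()
s3.upload_file("backups/finance-db.dump", bucket, f"{today}/finance-db.dump")
print("Backup uploaded for", today)
```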

9. E-mail usage and phishing attacks

 

Ensure that staff remain vigilant about e-mail usage and potential phishing attacks. Provide training so that staff act cautiously when deciding whether to click links from unexpected emails.

Providers such as Microsoft are constantly improving their spam recognition and phishing detection, but scrutinising your inbox is still important. If you are ever in doubt about whether an email is legitimate, consider contacting the sender via phone to confirm that the email is genuine.

10. Social media

 

Provide staff with training on using social media in the business context. Attackers can exploit LinkedIn and other platforms (including company websites) to gain knowledge about organisations.

Ensure staff remain vigilant to such threats, including the risk of being befriended by unknown online contacts via social media and of being lured into revealing sensitive information about workplace activity.


Ready to strengthen your business’s cybersecurity? Start implementing these top strategies today to protect your business from cyber-attacks.

Bad Bots and the Premier League – How to avoid a security own goal
https://techinformed.com/how-to-avoid-a-security-own-goal-premier-league-scalper-bots/ | Fri, 16 Aug 2024

As excitement for the start of the 24/25 Premier League season reaches fever pitch, fans of the sport are no doubt clamouring to get hold of tickets for key matches. However, for Liverpool FC fans, these plans were halted when a cyber-attack temporarily suspended ticket sales for members just a few weeks back.

The cyber-attack in question was a sophisticated bot attack. This incident was not isolated. Our threat intelligence team has recorded and mitigated similar attempts by scalpers to obtain highly sought-after football match tickets for other Premier League teams.

Tickets to Premier League matches are among the most highly sought-after in the world, so as the season kicks off, we'll look at the growing threat of bots, their role in ticket scalping, and how clubs can ensure they have the best defences in place.

The rising bot threat

 

In its most basic form, an internet bot is a software application that runs automated tasks over the internet. Bot-run tasks are typically simple and performed at a much higher rate than human internet activity.

Some bots are legitimate and harmless — for example, Googlebot is an application used by Google to crawl the Internet and index it for search. Other bots are malicious, such as bots used to automatically scan websites for software vulnerabilities and execute simple attack patterns.

Almost 50% of internet traffic now comes from non-human sources, with malicious bots comprising nearly one-third of all internet traffic. These bad bots have become more advanced and evasive, mimicking human behaviour to bypass traditional security defences.

The role of bots in ticket scalping

 

Bots can also be deployed to buy up large quantities of tickets when they become available, preventing genuine fans from purchasing tickets at face value. Scalpers then resell these tickets at significantly inflated prices, exploiting the high demand for these events.

Wherever there’s high demand with a limited supply, bot operators will take advantage of the resell value. This is precisely the case with tickets to highly popular sporting events. The English Premier League is the most popular football league in the world, and malicious actors are inevitably taking advantage.

A wider analysis found that there had been a 59% increase in attacks targeting European sports websites in January and another 66% increase in March, with security incidents increasing from the previous year.

This problem doesn't just pertain to sports events either; the same tactics are used wherever demand outstrips supply, whether it's highly sought-after concert tickets, game consoles, or the release of limited-edition merchandise.

Why bots can cause an own goal for businesses

 

Ticket scalping is a huge problem for any sports organisation, as it ultimately punishes genuine fans and could damage a club’s long-term reputation. However, that isn’t the only issue bots present.

They can also overload servers, causing website downtime during crucial moments like match days, which impacts fan engagement and revenue.

Additionally, bots can steal sensitive data, leading to potential breaches and loss of consumer trust. They can also inflate web traffic metrics, giving a false sense of popularity and potentially misleading advertisers. For Premier League clubs, these issues can significantly affect their global brand and fan loyalty.

Assembling the right defence formation

 

Football clubs and other sports organisations need to implement a robust multi-layered defence strategy to protect their digital ecosystems.

Just like a football team needs a solid defence to protect its goal, companies must implement an advanced bot management solution to safeguard their digital assets. This solution acts as the defensive line, using behavioural analysis, device fingerprinting, and challenge-response authentication to distinguish between legitimate users and bots, effectively blocking malicious activity.

Continuous monitoring and real-time analytics are akin to the vigilant defenders who constantly scan the field for threats. By analysing traffic patterns and user behaviour, companies can quickly identify and respond to suspicious anomalies that may signal bot interference.

Securing public and private APIs is like fortifying the defensive midfield. APIs are prime targets for bots, and protecting them requires robust authentication, rate limiting, and encryption. Regular updates and patches are essential to close any vulnerabilities that bots might exploit.
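To illustrate one of these controls, the sketch below implements a simple token-bucket rate limiter of the kind that might sit in front of a ticketing API. The thresholds and client identifiers are illustrative assumptions, not a description of any specific bot management product.

```python
# Minimal sketch of token-bucket rate limiting for a ticketing API,
# one of the bot defences described above. Thresholds are illustrative.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to the time elapsed.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request should be throttled or challenged

# One bucket per client (e.g. per API key or device fingerprint):
# 2 requests per second, with a burst allowance of 5.
buckets: dict[str, TokenBucket] = {}

def handle_request(client_id: str) -> str:
    bucket = buckets.setdefault(client_id, TokenBucket(rate_per_sec=2, capacity=5))
    return "200 OK" if bucket.allow() else "429 Too Many Requests"

for _ in range(8):
    print(handle_request("client-abc"))  # bursts beyond the allowance get throttled
```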

Collaboration within the industry is similar to a team working together to share intelligence about the opponent’s strategies. By establishing a shared database of known bot signatures and participating in industry-wide forums, companies can enhance their collective security and stay ahead of emerging threats.

Finally, educating customers about the risks of bots and how to recognise suspicious activity is like coaching the team to be aware of potential threats. Clear communication about security measures and best practices empowers customers to contribute to a safer online environment.

The final whistle

 

As the Premier League gears up for another thrilling season, clubs must ensure they don’t score an own goal by neglecting their digital defences.

Just as a football team relies on a strong backline to fend off attacks, clubs need a robust, multi-layered security strategy to tackle the growing threat of bots.

By implementing advanced bot management solutions, continuously monitoring for threats, securing APIs, collaborating within the industry, and educating fans, clubs can protect their digital assets and maintain the trust and loyalty of their supporters. After all, in the game of cybersecurity, a solid defence is the best offence.

 


Innovative tactics for defeating cyber threats in 2024
https://techinformed.com/innovative-tactics-for-defeating-cyber-threats-in-2024/ | Tue, 13 Aug 2024

In 2024, cybercrime shows no signs of slowing down. Cybercriminals are targeting citizens and businesses worldwide with sophisticated, hard-to-detect scams, causing operational disruptions and damaging reputations.

According to research, the global average cost of a data breach in 2024 is USD 4.88m – a 10% increase on last year and the highest total ever, underscoring the rising financial burden on organisations.

With this in mind, below we look at the innovative strategies companies can take in their fight against cybercrime, including incorporating cybersecurity into their ESG strategy, using AI wisely in cybersecurity, and aligning cyber practices with cyber regulations.

The need to incorporate ESG factors in cybersecurity

 

Across the globe, cyber threats are soaring. In particular, we are seeing an increasing number of attacks on critical infrastructure such as financial networks, healthcare and other networked systems. The problem is that, despite the widespread nature of cyber threats, many companies continue to prioritise Environmental, Social, and Corporate Governance (ESG) programmes focused on environmental and social issues without giving enough attention to cybersecurity. Given that cyber risk is now the most immediate threat to companies globally, organisations need to start considering it as part of their ESG strategy; a failure to do so could leave businesses less resilient and sustainable.

So how can businesses incorporate cybersecurity into their ESG strategy? They can align cybersecurity objectives with broader ESG goals: for example, if a company focuses on social responsibility, it should ensure that data privacy and protection measures are part of that commitment as well. Likewise, companies can include cybersecurity metrics and performance in ESG reports, and implement ongoing training programmes at all levels.

At Exclusive Networks, we are embedding cybersecurity into our sustainability agenda, training cybersecurity experts of the future, and partnering with non-profit organisations to promote CyberESG.

Unlock the potential of artificial intelligence in cybersecurity

 

Next, AI is imperative in cybersecurity strategies, bolstering digital defences by identifying anomalies, automating routine tasks, and enabling faster threat detection and proactive responses. However, AI is both a tool and a threat. Undeniably, AI is lowering the barrier to entry for cybercriminals and enhancing existing tactics, techniques, and procedures.

For instance, AI is boosting threat actors’ capabilities, enhancing their social engineering skills and increasing the effectiveness of cyber operations such as reconnaissance, phishing, and coding. According to research, the rise of generative AI-powered attacks has seen the estimated cost of cybercrime for businesses average $5.34 million annually in recovery expenses.

To strategically navigate the use of AI in cybersecurity, businesses can seek to understand AI-driven threats and employ defensive measures against them, including updating employee training programs to recognise these threats and respond appropriately. Alongside this, businesses should regularly update their knowledge on AI advancements and the emerging threat landscapes to adjust strategies accordingly, as well as continuously invest in R&D to explore innovative solutions for emerging cybersecurity challenges.

Adopting a balanced approach that considers ethical solutions and continuous learning will help businesses navigate the complexities of AI in cybersecurity.

Align cybersecurity practices with evolving cyber regulations

 

Lastly, businesses need to align cybersecurity practices with evolving cyber regulations to manage legal, financial, operational, and reputational risks. Keeping up with regulatory changes ensures practices reflect current best practice and incorporate the latest technological advancements.

A recent example of this is the EU AI Act, which, among other things, highlights in Article 15 that high-risk AI systems should adhere to the "security by design and default" principle and that measures should be taken to protect against attacks. By regularly tracking updates from regulatory bodies and conducting internal audits, businesses will be able to maintain compliance and minimise breach-related losses.

The landscape of cybercrime in 2024 underscores the critical need for innovative and proactive strategies to combat increasingly sophisticated threats. With the financial toll of data breaches reaching unprecedented levels, businesses must prioritise cybersecurity as a fundamental component of their operational and strategic framework. Incorporating cybersecurity into ESG strategies not only enhances security but also fosters long-term value creation and sustainability, and by leveraging technologies like AI, organisations can strengthen their defences while remaining vigilant against emerging cyber threats.

Paralegals in an AI world
https://techinformed.com/paralegals-in-an-ai-world/ | Mon, 05 Aug 2024

AI is clearly transforming how we approach, execute, and innovate in our work. With the capacity to automate up to 40% of the average workday, the implications for efficiency and productivity are profound, especially in sectors reliant on the processing and analysis of vast information volumes, such as the legal field.

According to McKinsey and IDC in 2022, 50% of all organisations surveyed already report using generative AI. Legal teams and firms need to find ways to align themselves with this technological upheaval to remain competitive.

Research firms are sketching a future whereby generative AI not only augments the capabilities of legal professionals but introduces new paradigms for legal practice. This begins with conversational AI and extends to entirely new AI-generated applications. The potential transformation is vast. But the rapid pace of evolution requires caution, especially when choosing the foundational AI technologies to kick off your organisation's AI journey.

The future

 

The legal profession stands at a pivotal moment, with generative AI poised to fundamentally transform the landscape of legal work. Law firms and legal professionals are encouraged to embrace this shift, not as a distant future, but as an unfolding reality that demands engagement, exploration, and adaptation. The greatest impact will be faster response times across the industry (especially when summarising large swathes of information), full adoption of dynamic contract management, and automation capable of updating business intelligence and responding to changing risk factors.

 According to Gartner, by 2025, 30% of enterprises will implement AI-augmented strategies. This suggests a move toward more sophisticated legal analytics and predictive modelling, enhancing precision in case outcome predictions and legal risk assessments. By 2026, the integration of AI colleagues into work processes will facilitate a collaborative model where AI assists in legal research, case preparation, and even mundane administrative tasks, enabling lawyers to focus on higher-value activities.

By 2027, the emergence of applications automatically generated by AI without human involvement will revolutionise legal software development, making custom solutions more accessible and affordable for law firms of all sizes.

For paralegals, the deployment of AI tools will likely lead to work involving more nuanced, business-led advice rather than the collation and administration of data, along with a focus on quality control and the optimisation and safeguarding of contractual arrangements. There is also an opening for paralegals to upskill and become expert legal "prompt writers", skilled in the effective training and deployment of legal AI.

Prompt engineering is more than just a trend; it is a fundamental shift that is reshaping the way legal professionals interact with technology to enhance their work. Critically, in law the stakes are high: a poorly constructed prompt could lead to misinterpretation or legal inaccuracy, whereas a well-designed prompt can provide accurate, reliable results that are critical to legal decision-making.

Consider the example of an AI tool used for contract analysis. A well-designed prompt can help the tool not only identify key clauses and terms, but also understand their implications in different legal contexts. This capability transforms tasks such as contract review, due diligence and even legal research, saving countless hours while improving accuracy.

Legal prompt engineering is a multidisciplinary field that requires a deep understanding of both law and AI. Mastering the field requires not only familiarity with legal terminology and concepts, but also an understanding of how AI models process and respond to language. This dual expertise can be challenging, but it is essential to the development of effective legal AI tools – and there is certainly an opportunity for paralegals here.
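As a purely illustrative sketch, the snippet below shows what a structured contract-review prompt might look like when sent to a general-purpose model via the OpenAI Python SDK. The model name, clause text, and instructions are assumptions, not a description of any specific legal AI product, and any output would still need verification by a qualified human.

```python
# Illustrative sketch of a structured contract-review prompt.
# Assumes the OpenAI Python SDK; the model name and clause text are
# placeholders, and output would still need human verification.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

clause = "The Supplier may terminate this Agreement at any time without notice."

prompt = (
    "You are assisting a paralegal with contract review.\n"
    "Task: identify the clause type, the risk it creates for the Customer, "
    "and any missing protections. Cite only the text provided; if the answer "
    "is not in the text, say so rather than guessing.\n\n"
    f"Clause: {clause}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```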

Should we be worried?

 

In most systems already in use that employ AI, the technology acts to support and improve the work of humans. Where firms are using such tech, there are signs that increasing familiarity is not only helping them to use it effectively but is also overcoming any concern that the use of AI could in some way replace humans. Most surveyed workers say that AI has improved both their performance and their working conditions.

A human (paralegal) interface with AI will be essential for the foreseeable future, particularly in areas such as identifying AI "hallucination" of detail in responses and verifying the outputs generated. For example, ChatGPT can be prone to "hallucinations" or inaccuracies. In one example, ChatGPT falsely accused an American law professor of sexual harassment and cited a non-existent Washington Post report in the process.

Chatbots are trained on a vast trove of data taken from the internet, although the sources are not available in many cases. Operating like a predictive text tool, they build a model to predict the likeliest word or sentence to come after a user’s prompt. This means factual errors are possible, but the human-seeming response can sometimes convince users that the answer is correct.
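The toy example below illustrates that point: given invented probabilities, the model simply returns the most likely continuation, which may be plausible without being true.

```python
# Toy illustration of the "predictive text" behaviour described above:
# the model picks the most probable continuation, not the most truthful one.
# The probabilities here are invented purely for illustration.
next_word_probs = {
    ("the", "report", "was"): {
        "published": 0.45,
        "retracted": 0.05,
        "fabricated": 0.02,
        "cited": 0.48,
    },
}

def predict_next(context: tuple[str, ...]) -> str:
    candidates = next_word_probs[context]
    return max(candidates, key=candidates.get)

print(predict_next(("the", "report", "was")))  # "cited": plausible, not verified
```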

Paralegals could verify the identity of clients, catch fraudulent transactions and AI-voiced phishing scams, and help identify where legal liability lies in the AI value chain: with providers (creators) or deployers (users). AI cannot make these distinctions itself; paralegals can.

Preparing for a future with AI

 

So, how can paralegals prepare for a future with AI, and what do they need to consider today to be ready for tomorrow? I believe paralegals should focus on targeted applications of AI rather than novelty factors. Developing their understanding of how automation can benefit a firm or in-house team will put them in a great position to lead the AI transition. It's worth remembering that, very often, it is the paralegals who have the best insight into the day-to-day processes required to keep a practice running effectively.

Paralegals are also well positioned to explore the opportunities that AI brings and therefore act as a vanguard of AI experts who understand where AI adoption will bring the most benefit, while also understanding the limitations and pitfalls of the technology.

Four cloud computing myths in Life Sciences
https://techinformed.com/busting-four-myths-around-secure-cloud/ | Mon, 15 Jul 2024

Companies slow to adopt cloud computing risk falling behind in security. But they also face lagging behind rivals in terms of innovation, speed and collaboration.

Pharmaceuticals are living through a technical renaissance, with advanced techniques to measure and engineer biology that were unthinkable a decade ago. Advanced lab instruments, robotics and sensors are commonplace, with AI and ML deployed on massive genomics datasets for drug discovery. However, cloud adoption in pharma is still patchy.

This isn’t unique to pharma. The move to the cloud does not happen overnight, and pioneering both the tech behind cloud computing and the modern security necessary to make it a reality has been challenging.

The road has been paved for accelerated innovation and success for businesses, governments and educational institutions. In cloud computing's early days there were discussions around the fear, uncertainty and doubt that businesses had about it; today, those industries take cloud as a de facto standard.

A lack of data liquidity can easily hold an industry back though. It was not that many years ago in healthcare when a patient’s data was essentially bound to the physical location of a healthcare provider’s office. In order for a patient to have their data be available at another provider’s office, it had to be printed out on paper and faxed. This was time-consuming, cumbersome and prone to errors. As a technology executive, I found this baffling and the set of problems all too familiar, so I decided to lean into healthcare and healthtech.

Over the last two decades, cloud computing has matured, approaches to security have advanced considerably and data liquidity has become the expectation and the accelerant. But many commonly held beliefs about cloud computing have prevented pharma from adopting a strategy of digital transformation. So here we will bust four myths around cloud security that do not hold up to any scrutiny.

Myth 1: cloud computing isn't as safe as on-premise

 

This is probably the biggest cloud myth out there. It is perpetuated by technology vendors defending their market share and by IT professionals who may be more comfortable with a server they can see and touch. But it frames the wrong question entirely: cloud computing companies are heavily incentivised to make secure products because, unlike most traditional, on-premise vendors, they have to take responsibility for security.

There are three aspects of cloud computing that impact security. Firstly, vulnerability management. With automated vulnerability management, security patches come out daily, weekly and monthly in the cloud, whereas many on-premise technology vendors can take a number of months – or even years – to patch security vulnerabilities. Most on-premises technology vendors have also not invested in security engineering or secure software development to the same extent and in the same way that cloud computing vendors have.

Cloud computing vendors are hyper-focused on embedding security into the software development life cycle and being able to react quickly to any identified security vulnerabilities. Secondly, configuration monitoring is often much easier in the cloud due to the investments that cloud vendors have made in APIs, which support security and compliance monitoring.

Think automated auditing on a daily basis, which makes it possible to know the state of security with systems and data on a daily basis. This level of cross-platform, cross-system visibility is so much harder with traditional technologies due to the lack of API architectures.
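As a hypothetical example of that kind of API-driven audit, the sketch below uses AWS IAM via boto3 to list users who have no MFA device registered, something that could run on a daily schedule and feed a compliance dashboard. The choice of provider and the specific check are assumptions for illustration only.

```python
# Minimal sketch of a daily, API-driven configuration audit, using
# AWS IAM via boto3 as one example provider. It lists users who have
# no MFA device registered.
import boto3

iam = boto3.client("iam")

users_without_mfa = []
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        mfa = iam.list_mfa_devices(UserName=user["UserName"])
        if not mfa["MFADevices"]:
            users_without_mfa.append(user["UserName"])

# In practice this would feed a compliance dashboard or ticketing system.
print("Users without MFA:", users_without_mfa)
```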

Third, one of the biggest drivers behind why cloud computing's approach to security is often better is that cloud vendors want to make money. They understand that they must share the responsibility for security if they are to be trusted, and if they are to increase revenue. This incentivises them to make products more secure and maintain them. Out of this has arisen a modern approach to secure software development and cloud security operations. More often than not, cloud computing companies offer a product that is more secure and will be better maintained than its traditional, on-premise counterparts.

Myth 2: security is solely the responsibility of the vendor

 

The Shared Responsibility Model is one of the greatest strengths of cloud computing. Cloud vendors have a responsibility to securely develop cloud software and infrastructure so, to do this, they use automated vulnerability management, routine penetration testing, asset management, configuration management and more.

The end result is that many cloud software products undergo more security scrutiny, on a more frequent basis, than on-premise technologies do. Not all cloud products are the same when it comes to security, but it is becoming increasingly common for enterprise Software-as-a-Service (SaaS) companies to approach security in this way.

But that is not enough. It is the responsibility of each pharma, life sciences or biotech organisation to configure the cloud service in a secure way: for example, making decisions around single-factor versus multifactor authentication, choosing whether to enable IP range restrictions, and choosing whether to enable role-based access controls.

The most secure cloud computing products can be configured in an insecure way, so it’s paramount that life sciences organisations work closely with cloud computing vendors to securely configure their products. The vendors will take care of the vulnerabilities, but each organisation needs to take care of the configurations.

If we take a data-driven approach to this – looking at actual attacks – only 5% of recent breaches involve exploiting a vulnerability. People talk a lot about 'hackers', but what the data shows us is that threat actors are more like 'social experts' who love to target people and single-factor authentication. In fact, 82% of breaches involve the human element.

Threat actors know it is far easier to target the life sciences workforce than it is to exploit their cloud computing services and data platforms. When it comes to protecting life sciences organisations, the data suggests we should be focused much more on people and credentials than whether or not software is in the cloud. The data doesn’t show us that cloud computing is easier to hack or that on-premise technologies are safer; it shows us that humans are often the key to a threat actor’s success.

Myth 3: as more companies move to cloud, there will be more security incidents

 

It is true that as more companies adopt cloud computing, there will be more security incidents involving cloud computing – we clearly see this in investigative reports – however it doesn’t mean that the breaches are the result of cloud computing. Indeed, the vast majority of breaches involve credentials, social engineering, phishing and misconfiguration, which means organisations are likely not using the security features provided by their cloud vendors (for example, multifactor authentication, IP range restrictions, etc).

The vast majority of breaches do not involve a threat actor hacking into cloud computing companies via an application vulnerability. Again, a secure product can be used in an insecure way if we don’t pay close attention to customer-controlled configurations. The good news is that secure configurations are very easy to implement and most cloud providers will readily guide life sciences organisations through that process.

Myth 4: you can’t verify what’s happening with your data in the cloud

 

Compliance is also a reason that some organisations avoid cloud computing, but the idea that you can’t verify what’s happening with your data is untrue.

Ironically, because cloud computing is built on API architecture, most cloud vendors provide very transparent logging of who did what, when, how and from where. If an organisation wants to know who configured its cloud platform in a certain way, it’s possible to query the logs and find out.

The same is true for finding who has viewed data, uploaded data or edited data. It is often far easier to know what is happening with data, and when it is being stored or processed, with an enterprise SaaS platform, than it is when it is with disparate legacy software systems in physical data centres.

With cloud computing, and enterprise SaaS specifically, it’s possible to more easily attain a state of programmatic assurance, making compliance with various regulations far easier than having to direct our teams to perform manual reviews, manual verification and manual evidence collection for audits.
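As an illustration of that transparency, the sketch below filters a generic export of audit events to answer "who changed a configuration, and when". The log format and field names are hypothetical; real platforms expose similar records through their own audit APIs or log exports.

```python
# Minimal sketch of querying cloud audit logs for "who did what, when".
# The log entries use a generic, hypothetical shape with ISO 8601 timestamps.
import json
from datetime import datetime

with open("audit_log.json") as f:
    # list of {"actor", "action", "resource", "timestamp", "source_ip"}
    events = json.load(f)

cutoff = datetime.fromisoformat("2024-07-01T00:00:00+00:00")

config_changes = [
    e for e in events
    if e["action"] == "update_configuration"
    and datetime.fromisoformat(e["timestamp"]) >= cutoff
]

for e in config_changes:
    print(e["timestamp"], e["actor"], e["resource"], e["source_ip"])
```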

Labour's next steps: HealthTech, GreenTech, and Startup industry leaders weigh in
https://techinformed.com/uk-election-2024-health-tech-green-tech-economy-startups-productivity/ | Fri, 12 Jul 2024

After the Labour Party’s landslide election victory, the new government promises changes across various sectors, with HealthTech, GreenTech, economic growth, and productivity at the forefront of its transformation.

TechInformed has previously covered exactly what the winning manifesto mentioned about tech. But can Keir Starmer’s government bring a wave of innovation and support to these critical areas and position the UK as a global leader in technology and sustainability?

We collected insights from industry leaders to find out what they think.

 

HealthTech

 

Increased funding, supportive regulations, and encouragement for HealthTech startups will keep the UK competitive and attractive, fostering investment and innovation in the sector, say voices from the sector.

 

“The biggest challenge our startup landscape is facing is the exodus of businesses from the UK to the US, with 24% of UK HealthTech SMEs preferring to launch in the US rather than the UK.

“Silicon Valley’s allure is in part due to the funding available — US financial schemes through agencies such as DARPA and NASA have been instrumental in its growth. However, we don’t just need more money from the Government but also from investors.

“Despite the UK seeing the third-most HealthTech investment globally, the US is seen as a markedly more attractive market for startups. In addition to funding, this is because of a more supportive regulatory environment — 46% of HealthTech companies have removed products from the UK market due to regulatory uncertainty.

“HealthTech today in the UK is comparable to Fintech over a decade ago, and regulations such as Open Banking and a regulatory sandbox facilitated the UK’s becoming a world leader in Fintech.

“This is something the government has not yet addressed, and, with Labour’s calls for a more digital, interconnected NHS, they will need to look at regulations that facilitate competition, collaboration, and interoperability to accelerate the UK’s economy and create a more favourable environment for startups.”

Santosh Sahu, CEO & founder, Charac


 

GreenTech

 

Across the sustainability space, while there is support for Labour’s pledge to decarbonise the power system by 2030, there is a collective desire for urgent, comprehensive, and sustained actions in various sectors to achieve net zero goals while addressing economic and social needs.

Industry leaders address the importance of a regulatory body for carbon accounting, shared concerns over releasing lower-grade green belt land for development, developing a national, affordable energy infrastructure, and innovative urban planning that integrates natural landscapes.

“We need to see the Labour Government take clear, tangible steps to demonstrate its recognition of net zero as the greatest commercial opportunity of our time. There is a lot to do in little time, so prioritisation is key. Importantly, the new government must resist the urge to end and alter existing processes and departments established by the previous Conservative Government that work or showed promise.

“However, there are areas which the Labour Government does need to change. For a start, they must deliver on their manifesto pledge to reverse the damaging policies the previous Government placed, for example, restricting the Bank of England from considering climate change in its mandates.

“Establishing Great British Energy grabs headlines, but equally important are improvements that reduce the red tape and planning restrictions surrounding building green energy infrastructure such as offshore wind farms and electric car charging terminals, reducing grid connection waiting times.

“And then there is data. If businesses are to create accurate and realistic net zero plans, they need access to vital data like energy, water, and waste usage. Currently, this type of data is often held by commercial landlords, who are under no obligation to share it with their tenants.

“In the longer term, the Labour Government needs to create a regulatory body for carbon accounting. Just as the Financial Conduct Authority (FCA) regulates the financial sector, we need an overarching governing body for carbon accounting to ensure consistency, provide guidance, and hold organisations accountable if we are to reach net zero by 2050.”

Andrew Griffiths, director of policy & corporate development, Planet Mark

 

Economy & Tax

 

Labour is urged to support the tech sector by promoting London as a global hub, fostering regional tech growth, and attracting international tech companies. Concerns include potential capital gains tax increases, but optimism exists for Labour’s promises on tech investment and R&D.

Taxwise, there’s also a call for Labour to invest in HMRC skills to improve customer service and clarify anti-avoidance rules to protect innocent taxpayers.

 

“Tech has been recognised as a core pillar of the economy, so for Labour to instil real change for the sector it needs to first reaffirm London as a global tech and fintech hub: promote foreign investment, address local skill gaps, and ensure the capital will comfortably remain Europe's capital for tech innovation.

“Second, look beyond London to support innovation happening across the country and ensure these companies can effectively scale up. Use the tech sector to lead on regional regeneration, particularly in the North. Manchester and Leeds have a thriving community of tech businesses. Any attempt to tap into the potential of the North must involve tech.

“Third, attract international tech companies to set up in the UK, whether for investment, product launches or a general expansion. There is significant interest in the UK from fintechs across Europe, the US, MENA, and Asia — this must be capitalised on.”

Rhys Merrett, head of tech PR, The PHA Group 

 

“All political parties have promised to make the UK a tech powerhouse, yet the last few years have presented tech founders with serious challenges — an uncertain economic environment has hampered M&A activity, and an increasingly tough narrative on immigration has made it all but impossible to recruit the best overseas talent.

“A possible increase to capital gains tax is a worry, but we believe tech founders will welcome the new Labour government.

“It has made strong promises: to commit significant investment to clean and environmental tech, to offer clarity around R&D, and to leave corporation tax and personal taxes unchanged. It has also committed to securing solid trade deals in key service sectors.”

Simon Wax, partner, Tech & Media at Buzzacott

 

“The HMRC customer service crisis is going to need proper investment to fix, and both main parties have overlooked that in their manifestos. Phone calls to HMRC now take an average of 23 minutes to answer; taxpayers deserve better.

“There remains huge uncertainty over whether Making Tax Digital will happen. The new Government should make a clear statement on this so the tax industry and the self-employed can properly plan.”

Andrew Snowdon, chairman, UHY National Tax Group

Startups

 

Despite the UK having more “unicorn” startups than Germany, France, and Sweden combined, its scale-ups struggle to secure investment, and many entrepreneurs move overseas once they reach a certain size. There are calls for Labour to develop innovative funding solutions to retain and support high-growth tech startups.

 

“One of the biggest frustrations for ambitious, pioneering tech start-ups that we work with is securing the necessary investment to scale. Time and again, we see tech entrepreneurs growing their businesses to a certain size only to struggle to get the investment they need to fuel the next stage of their growth; all too often, they end up taking their business overseas to secure its future.

“In 2022, the UK was only the third country in the world to have a tech sector valued at $1 trillion, with more ‘unicorn’ billion-dollar tech startups being created than Germany, France and Sweden combined. Tech visionaries and entrepreneurs are vital to our country’s future success—not just in revenue terms but also in enabling us to keep ahead in a rapidly evolving landscape.

“We’re calling on Keir Starmer and the new Labour government to find imaginative solutions to the funding challenges that rapid-growth tech startups face, so their drive, vision, expertise and wealth creation stay in the UK.”

Rob Borley, CEO, Dootrix

 

Productivity

 

It is suggested that investing in infrastructure and industrial strategy can drive economic growth, and that addressing inefficiencies within businesses could accelerate digital transformation and boost productivity, helping to achieve economic goals.

 

“The new Government needs to focus more immediately on removing inefficiencies within UK businesses, which are weighing down both the private and public sectors.

 


 

“Process Intelligence can make this a reality, providing organisations with data-based methods of generating positive impact on the top line, the bottom line, and the green line.

“Delivering fast growth is tough, but in the meantime, businesses can become leaner and more agile, gaining maximum value within their current processes. This allows greater efficiency, increases productivity, and accelerates digital transformation — all of which will help Labour in achieving its economic goals.”

Rupal Karia, country leader UK&I, Celonis

 

For more tech-oriented coverage of elections around the world, check out our dedicated hub for the Year of Elections.

The post Labour’s next steps: HealthTech, GreenTech, and Startup industry leaders weigh in appeared first on TechInformed.

Labour’s next steps: Cyber security, AI, & Open-Source industry leaders weigh in https://techinformed.com/labour-promises-tech-insights-cybersecurity-ai-open-source/ Thu, 11 Jul 2024 15:23:32 +0000

Following the recent change in government in the UK and the Labour Party’s landslide victory, a promise of change is on the horizon.

The Labour Party’s manifesto mentioned ‘technology’ and ‘innovation’ more frequently than any other party’s, suggesting that these will be central to the government’s efforts to enhance public services, boost productivity, and revitalise the UK economy.

We’ve previously discussed the promises made in the Labour manifesto as they pertain to technology in various sectors. But what are the perspectives of industry leaders on the future of UK tech policy and its potential impact on businesses?

TechInformed has gathered insights from Cybersecurity, AI, and Open-Source leaders to provide a comprehensive view of the industry’s positions.

Cybersecurity & Online Safety

 

The election campaign has been criticised for neglecting cybersecurity, with the industry urging the new government to prioritise it through strong legislation, proactive strategies, and the securing of critical infrastructure. There are also calls to implement and enforce the Online Safety Act swiftly, protecting individuals while balancing digital protections with free expression and privacy rights.

 

“With recent high-profile attacks on the NHS and MoD highlighting critical gaps in national security, the new leaders must play their part in ensuring that cybersecurity is a boardroom priority, with accountable outcomes, in all organisations, given that the UK is at high risk of a “catastrophic ransomware attack”.

“Cyber security efforts have remained stagnant even as threats rise, with 43 legacy systems at critical risk levels this year alone. The new government must take decisive action and hold all businesses accountable for improving the UK’s level of cyber preparedness through more robust and comprehensive legislation that ensures cyber security is taken more seriously.

“Government must advocate for building cyber resilience through proactive strategies, secure-by-design principles, and visibility into everything that is coming in and out of an organisation, including encrypted data. They must also lead by example, taking steps to secure the public sector itself, especially critical national infrastructure, as the traditional IT and security strategies underpinning these organisations are no longer sufficient for the extent of today’s sophisticated threats.”

Mark Coates, VP EMEA, Gigamon

 

“Details from the Labour Party have been minimal. However, what we do know from their manifesto is that they recognise the threat to our safety and security. They specifically call out the growing emergence of hybrid warfare, including cyberattacks and misinformation campaigns which seek to subvert our democracy.

“Labour proposes to tackle this by conducting a Strategic Defence Review. This will happen within Labour’s first year in government, and their manifesto states that it will set out the path to spending 2.5% of GDP on defence.

“I urge Sir Keir and the Labour Party to speak with a broad spectrum of people across the cyber security industry, including those at the front line of law enforcement activities. The reality of the problems and the needs of the UK must be seen and addressed in this review.”

Adam Pilton, cybersecurity consultant, CyberSmart

 

“For all the election noise, cyber security was absent. In a way, this is understandable; there are many other social and economic issues to focus on when trying to woo voters. But as the dust settles on this election, continuing to overlook cyber security would be a grave mistake.

“The Electoral Commission: hacked. NHS hospitals: hacked. Countless UK businesses: hacked. How many attacks are too many? With Labour coming into power for the first time in 14 years, a comprehensive strategy to strengthen the UK’s cyber defences is urgently needed.

“The EU is implementing the NIS2 directive. Why does the UK lag in securing its digital infrastructure? It’s time for the government to wake up, smell the coffee and develop a plan to change this.”

Al Lakhani, CEO, IDEE

 

“With the appointment of Peter Kyle as Secretary of State for Science, Innovation and Technology, it’s a vital time for Labour to reaffirm its commitment to online safety. The Online Safety Act, which Labour supported, has enabled the UK to lead the world in this space and set the direction for online platforms to make concrete changes that keep people safe.

“The new government must ensure that the Act is not only implemented swiftly but also enforced robustly to hold tech companies accountable. Keeping up the pace here will be crucial to tackling some of our biggest societal problems, such as protecting children and other vulnerable people from age-inappropriate, harmful, and illegal content. Child Sexual Abuse Material (CSAM) and fast-developing AI-generated harms like deepfakes and nonconsensual explicit content also demand urgent attention.

“While media coverage of online safety often focuses on ‘Big Social’, we hope to see more attention paid to other user-to-user platforms, including video games, chat apps, and streaming services. Platforms must be held responsible for the content posted by their users to create safer online communities.”

Andy Lulham, COO, VerifyMy


 

Open Source

 

Leaders in the field argue that open-source technology plays a critical role in driving economic growth, enhancing public sector efficiency, and maintaining technological leadership, and that it warrants strategic government support and investment.

 

“Change must not only start now but must be digital. Only a fundamental shift in our digital policies and practices can impact the lives of every individual across the UK.

“This can be made possible by leading with digital and funding the development of the right skills in open-source software. Leveraging the globally visible living CV created by open-source contributions will give individuals who can code but have no employment experience the opportunity to be hired by global tech companies as home workers, with a proven track record of contribution.

“We should remember that these are employers who recruit based on skills, not location. In this way, rurally based individuals can take international jobs, stemming talent flight and injecting international salaries into the UK economy whilst building our future tech sector.

“With 96% of software codebases having open-source software dependencies today, the public sector must learn how to manage open-source properly. Only this change allows interoperability that can open data flows between systems, unlock efficiency, and break patient and practitioner frustration in the NHS. Our new government owes the NHS this change.”

Amanda Brock, CEO, OpenUK


 

AI & Regulation

 

Leaders in the AI space stress the need for AI openness to prevent centralised control, urging the new government to learn from past technological developments. They emphasise tech investment, calling for the appointment of Chief AI Officers in government departments and the creation of an AI fund to foster public-private innovation while protecting privacy through synthetic data.

Industry-specific regulations, especially for healthcare and pharmaceuticals, are highlighted, alongside the need for a dedicated office to ensure diverse policy input. There’s also a strong call for robust AI processes to mitigate risks, ethical AI use, transparent policies, and continuous compliance to protect data and maintain public trust.

 

“AI will have an impact in the coming months and years like the internet has had over the last 20. But this time, everyone knows how the game plays out. We know the risk today is that AI ends up in the hands of a few.

“This time, our new leaders must learn from the recent past. History will not be forgiving if they do not. To protect the UK’s AI leadership, Labour must look to open AI wherever possible. But it must do this with a considered understanding of what it means to open each component that makes up an AI system, from models to data, and what it means to be partially or fully open.

“It’s complex, yes, but we expect our leaders to be able to understand complex tasks and to cut through the noise created by those who can shout loudest. The biggest risk the UK faces from AI today is that our leaders fail to learn the lessons of the last 20 years of tech and do not enable AI openness. Only Labour can bring this change.”

Amanda Brock, CEO, OpenUK

 

“It is crucial the new government places an emphasis on tech investment, particularly around AI, which will be paramount to streamlining services and enhancing citizens’ lives.

“We expect to see Chief AI Officers hired across government departments to ensure AI underpins the priorities in all the parties’ manifestos, while a foundational data strategy with governance at its core will help meet AI goals.

“An AI fund can also help promote public-private innovations and enable the commercialisation of data and assets globally through synthetic data. This approach would offer a unique opportunity to unlock value from data whilst maintaining robust privacy protections, as synthetic data can mimic real-world information without exposing sensitive personal details.

“Regarding AI regulation, it would be beneficial to establish industry-specific rules, with particular attention paid to sectors like healthcare and pharmaceuticals and their unique needs. For the pharmaceutical industry, in particular, there needs to be more robust agreements established on the use of medical data, with internal investment to manage and protect this data. This could include shared profits or IP rights provisions when companies benefit from UK resources.

“A dedicated office to oversee these initiatives would help to ensure that diverse voices are heard in shaping data and AI policies. These steps will be crucial for the new government to support data-driven industries and ensure they can capitalise on AI, thus positioning the UK as a global innovation powerhouse whilst ensuring sustainable growth and protecting national interests.”

James Hall, VP & country manager UK&I, Snowflake

 

“Labour’s promise to introduce “binding regulation” for AI safety will create ripple effects across the UK private and public sectors. And while stricter regulation for major AI firms is planned, organisations leaning on these emerging technologies will have to scrutinise their AI strategy here and now.

“With Labour’s wider review on the misuse of AI for harmful purposes, companies need to telegraph they are mitigating risk with AI. Both ‘good AI’ and ‘bad AI’ exist, and combatting threats from bad AI is critical in an increasing risk environment, as over half (59%) of IT leaders say that customer-impacting incidents have increased, growing by an average of 43% in the last 12 months.


 

“In light of regulatory pressures and mounting risk factors, companies need to establish watertight AI processes and mechanisms to ensure the ethical use of AI: how are external AI threats being tackled? How are internal AI hygiene processes protecting customers? CIOs and DPOs face a big set of tasks: sticking close to regulators, sharing rigorous policy documentation publicly, and implementing clear and transparent network policies on data collection and information security.

“Compliance is a 24/7 job, and dropping the ball in areas like data protection could result in hefty fines and loss of trust.”

Eduardo Crespo, VP EMEA, PagerDuty

 

For more tech-oriented coverage of elections around the world, check out our dedicated hub for the Year of Elections.

The post Labour’s next steps: Cyber security, AI, & Open-Source industry leaders weigh in appeared first on TechInformed.
