AI Archives - TechInformed https://techinformed.com/tag/ai/ The frontier of tech news Mon, 09 Sep 2024 14:25:43 +0000 en-US hourly 1 https://i0.wp.com/techinformed.com/wp-content/uploads/2021/12/logo.jpg?fit=32%2C32&ssl=1 AI Archives - TechInformed https://techinformed.com/tag/ai/ 32 32 195600020 Who needs interns, when you have AI? https://techinformed.com/who-needs-interns-when-you-have-ai/ Mon, 09 Sep 2024 14:25:43 +0000 https://techinformed.com/?p=25675 GenAI tools like ChatGPT and Gemini have created endless excitement in the tech world as their potential to transform working lives continues to be explored.… Continue reading Who needs interns, when you have AI?

The post Who needs interns, when you have AI? appeared first on TechInformed.

]]>
GenAI tools like ChatGPT and Gemini have created endless excitement in the tech world as their potential to transform working lives continues to be explored. However, there have been equal concerns over what happens when these models malfunction. Examples of AI getting things wrong are already making the headlines, including incidents at Air Canada and Google.

Quirks like these demonstrate how important it is for organisations to weigh up the strengths and weaknesses of AI before applying it to any aspect of business. Without forethought, companies risk embarrassing or even disastrous consequences.

At the same time, it should be appreciated that GenAI is only in its infancy, yet to mature to its full-blown potential over the next few years. However, even in these early stages of development, it still has many strengths, provided implementations are thought through carefully.

AI strengths

 

The tax industry is a prime example of what’s already feasible with the current iterations.

Looking on the positive side, AI works day in, day out, never getting tired or stressed, carrying out tasks at a vast scale. It is extremely efficient at the mundane and repetitive jobs that people tend to generally dislike and are highly time-consuming.

Take a task like analysing ledger data for VAT purposes; in some instances this can be millions of rows. There aren’t enough interns you could throw at the task of reviewing every row, yet AI can analyse this kind of dataset in seconds – making it the ‘infinite intern’.

Similarly, fast automation of routine tasks, like data entry, number-crunching and anomaly detection, are a piece of cake for AI. Well-suited to these types of activities, AI churns through data processing quickly, constantly, and reliably.

Its consistency is a key strength. Unlike employees, who may not always be objective, AI algorithms stick to the rules, applying them in the same way on each occasion. Whereas evidence shows an individual’s decision-making abilities and performance can vary significantly owing to factors such as hunger, fatigue, workload, and stress.

Even the time of day can make a difference to someone’s reasoning powers, as highlighted by a recent study. It found workers are less active and more prone to making mistakes on afternoons and Fridays, with Friday afternoon representing the lowest productivity point.

AI can also extract valuable insights from huge volumes of disparate tax and financial data that would take a person days or weeks to compile and interpret. Predictive analytics powered by AI can forecast trends, model different outcomes based on complex tax scenarios, and uncover potential compliance issues.

On this basis, AI sounds like a compelling choice for routine work and mass data crunching, at the very least. However, it’s not all plain sailing as AI tools are only as good as the training they receive. It’s a case of garbage in, garbage out.

AI weaknesses

 

If the data used to inform an algorithm is inaccurate, this will detrimentally affect the results it provides. This is how errors occur and biases creep in, where outcomes are at best misleading or, at worst, completely wrong.

They can also lack the capability to interpret important context, and miss subtleties which humans easily take into account. The end result can be spurious responses and hallucinations, where AI misinterprets data and fabricates answers. Fortunately, these issues can be rectified as AI does respond well to constant training, but this can take time.

It boils down to having the right monitoring, evaluation, and re-training in operation. AI tools shouldn’t be left to act on their own without proper oversight, and outputs should be sanity checked by humans.

The future for interns

 

So, what does the rise of AI mean for interns?

For tech-savvy generations, like Gen Z, the future for interns in the tax industry looks bright. Having grown up using technology throughout their lives, they expect to find technical innovation in the workplace. Indeed, many consider it a must-have when choosing a career path. The finance and tax industry has a massive opportunity to tap into this mindset to encourage new talent into the industry.

By harnessing AI to do the mundane work as the ultimate ‘infinite intern’, it can support new graduates and tax assistants rather than replacing them. Instead of spending most of the early part of their careers on traditionally laborious work, human interns will check and review information already processed and analysed by AI.

Thus, freeing up time to hone their accountancy skills more quickly. And, then use the insights that AI uncovers for more satisfying work, usually only possible much later in their tenure, such as strategic planning, problem solving and value-added decision-making for the business or its clients.

GenAI will continue to improve. But, for the time being, assuming it can mimic the expertise of a senior level decision maker is asking for trouble. However, if deployed with due diligence, AI can bring much needed efficiency and valuable insights to financial data processing. It will help to attract a forward-thinking generation of tax professionals looking for careers that champion technical innovation and new ways of working.

The post Who needs interns, when you have AI? appeared first on TechInformed.

]]>
25675
AI’s role in the autonomous enterprise https://techinformed.com/ais-role-in-the-autonomous-enterprise/ Fri, 06 Sep 2024 10:55:12 +0000 https://techinformed.com/?p=25660 As businesses evolve following last year’s surge in AI and automation, the autonomous enterprise concept is emerging as the next major leap. Experts claim that… Continue reading AI’s role in the autonomous enterprise

The post AI’s role in the autonomous enterprise appeared first on TechInformed.

]]>
As businesses evolve following last year’s surge in AI and automation, the autonomous enterprise concept is emerging as the next major leap.

Experts claim that mixing artificial intelligence and automation may offer enterprises a future where technology can self-diagnose and solve issues without human intervention, reducing potential system downtime and boosting productivity.

The concept of an autonomous enterprise sees AI-driven systems manage tasks like predictive maintenance, allowing employees to focus their skills on innovation over troubleshooting. These systems operate in real-time, leading to fewer disruptions and enabling seamless operations across all departments.

Given GenAI’s momentum in recent years, are we anywhere nearer to seeing true autonomous enterprises?

According to Akhilesh Tripathi, CEO and founder of automation vendor Digitate, we are approaching a key moment in the development of AI that will see much more automation across the enterprise sector.

“When we started Digitate, we recognised that, in most large organisations, automation was siloed — it sits within its own island,” he explains. “We found these islands exist because automation doesn’t scale.”

In other words, automation for individual tasks or processes worked, but once additional complexity was introduced, most AI and automation platforms would fail or struggle.

The problem, of course, with having automation but only operating in silos is that it isn’t really automation because businesses still need someone or something to connect each of the processes.

And it is AI itself that can offer a solution, says Tripathi.

Proactive

 

Digitate was launched in 2015 as part of Tata Consultancy Services. It initially offered its Ignio suite of services, which aims to automate enterprise operations.

Tripathi is a TCS veteran, having worked for the Indian giant for more than two decades and rising to head up its Canadian unit. He assumed the chief commercial officer of Digitate at launch and became CEO in 2020.

“Tata has been working on automation and AI since the 1980s. At one point, I worked on a project where we developed a way to automate the delivery of coolant for a water plant.

“As we got more into it, it became very clear that this sort of process automation could be transformative from an enterprise standpoint, but you need to put it directly in the hands of the enterprises so they can maximise its value.”

Digitate has already worked with several large enterprises to help them join up automated processes and deliver AI-powered services.

Avis

 

This includes a project with car rental firm Avis, which was facing a situation that had left its IT and support teams constantly firefighting and manually resolving issues, as well as several other challenges.

Avis engaged in an organisation-wide digital transformation project to move from manual and reactive operations across its 2,900 offices spanning 112 countries to an autonomous and predictive one.

At the time, the rental firm was using a third-party monitoring tool to monitor business-critical applications, but it had suffered availability issues caused by server-level problems, resulting in missed critical alerts.

To overcome this, Avis approached Digitate to implement a solution that would monitor and manage the availability of a third-party monitoring tool. Its Ignio AI platform allowed Avis to monitor any server-side issues, and whenever one arose, the platform conducted a root-cause analysis. It would then triage the issue automatically and perform ‘self-heal’ functions where possible.

Digitate also worked with Avis to reduce downtime of critical applications, including its booking tool for customers. Ignio monitored an Oracle database and functional attributes of a CMS system linked to the applications to isolate issues. It then drilled down further into the application layer, web layer, and database layer to triage issues and proactively fix them.

Overall, Ignio has managed more than 176,372 requests to date, leading to a 68.6% reduction in noise and 99.9% uptime for in-scope critical applications. Around 60% of detected incidents were resolved automatically by the platform.

“We love seeing innovation happen in areas that have been pain points for us for years. This saves us a ton of time and has dramatically improved our compliance,” said Avis in a customer testimonial.

Data-day AI

 

The Ignio platform uses generative AI to assess data points produced by existing operations, then predict potential problems and, where possible, solve them before they need human attention. If it cannot resolve them, it can flag problems earlier, reducing downtime.

Data hygiene is one of the significant challenges facing any enterprise looking to automate processes. If the data used by analytics tools such as Ignio is not clean, its effectiveness will be reduced. However, many companies are using reems of legacy data that are not clean, which is embedded in the processes they are looking to automate.

Tripathi acknowledges this challenge but says AI can be used to recognise duplicity or anomalies within data sets.

“We will have both the logs and information from a sensor, so that helps us to make sense of those processes and survey what is good data and what is not,” he explains.

“We can also present this back to the enterprises so they can start the process of cleaning up their datasets internally, which also helps automate processes in the long run.”

The platform can also detect what is classed as “normal” performance from processes and devices in what Tripathi calls an enterprise contextual blueprint.

“This is dynamic – it is constantly updating,” he adds. “But we can know what ‘Monday morning normal’ is compared to other days and reverse populate that.”

Engie

 

Another Digitate customer, energy provider Engie, generated around 150,000 bills for its 12+ million customers every day.

“Even a minor percentage of problems with billing or invoicing leads to a huge impact, resulting in customer dissatisfaction, handling front desk manual corrections, and piles of unbilled revenue,” says Tripathi.

In some ways, technology made this worse. The introduction of smart meters led to a higher need to correct meter readings, negatively impacting customer satisfaction.

Engie turned to Digitate to help it reduce the generation of incorrect bills and invoicing, reduce revenue realisation loss caused by backlogs, and improve customer satisfaction.

Ignio was integrated with an Oracle database to conduct the automatic execution of service requests with scheduling while identifying and correcting erroneous data in SAP.

This led to more correct meter readings and billing, which in turn led to fewer erroneous bills and examples of double billing. Digitate also helped Engie automate more of its call centre functions to improve customer service.

Stats-wise, this involved more than 4,000 batch jobs that were monitored autonomously. On the finance side, payment files worth 2.5 million were integrated without delay, and monitoring improved system stability by 30%, according to Digitate.

The AI equation

 

Confidence in AI systems is on the rise, and according to Tripathi, this means that elements of automation have now gone mainstream. However, with it have also been some warnings, including several business leaders who warned of the threat newer AI models could pose to humanity.

Tripathi believes AI will make humans “appear more intelligent” because users will be able to extract more insights from business processes and incorporate them into discussions.

He argues that when mixed with automation, AI can “simplify conversations and accelerate problem resolution.”

“If you strengthen that relationship, businesses will see huge advantages. Leaders can better understand what is going on in their business and visualise challenges, helping to build more support for the most complex problems that automated systems can’t overcome alone,” he adds.

He concludes: “In my view, GenAI plus human is better than just a human. But GenAI plus automation AI, plus a human, is better than GenAI. We are big believers in augmenting intelligence – it is never about replacing it.”

The post AI’s role in the autonomous enterprise appeared first on TechInformed.

]]>
25660
Black Hat USA 2024: Eight ways to achieve ‘Secure by Design’ AI https://techinformed.com/black-hat-usa-2024-eight-ways-to-achieve-secure-by-design-ai/ Fri, 06 Sep 2024 09:40:50 +0000 https://techinformed.com/?p=25635 Balancing the need to innovate and develop at speed with the need for security is keeping many cyber folks awake at night, or at least… Continue reading Black Hat USA 2024: Eight ways to achieve ‘Secure by Design’ AI

The post Black Hat USA 2024: Eight ways to achieve ‘Secure by Design’ AI appeared first on TechInformed.

]]>
Balancing the need to innovate and develop at speed with the need for security is keeping many cyber folks awake at night, or at least it was preying on the minds of the speakers who addressed Black Hat’s inaugural AI Summit, which took place in Las Vegas last month.

Occurring just a couple of weeks after the global CrowdStrike IT outage, which ground airports to a halt and forced medical facilities to resort to pen and paper, it felt the right time to reflect as firms find themselves under pressure to adopt AI  faster and release products before they are properly evaluated.

Lisa Einstein, senior AI advisor at the US Cybersecurity and Infrastructure Security Agency (CISA), compared what she called “the AI gold rush” to previous generations of software vulnerabilities that were shipped to market without security in mind.

Global IT Outage: BSOD at airports
CrowdStrike outage: Failure in the design and implementation process had a global impact

 

“We see people not being fully clear about how security implications are brought in. With the CrowdStrike incident, no malicious actors were involved, but there was a failure in the design and implementation that impacted people globally.

“We need the developers of these systems to treat safety, security and reliability as a core business priority,” she added.

The Internet Security Alliance’s (ISA) president and CEO, Larry Clinton, put it more bluntly: “Speed kills — today we’re all about getting the product to market quickly — and that’s a recipe for disaster in terms of AI.”

He added: “Fundamentally, we need to reorientate the whole business model of IT, which is ‘Get to market quick and patch’. We need to move to a ‘Secure by Design’ model and to work with government partners so we are competitive and secure.”

Many of the event’s sessions, which featured speakers from WTT, Microsoft, CISA, Nvidia, as well as the CIA’s first chief technology officer, were focussed on how organisations might achieve ‘Secure by Design’ AI, which TechInformed has summarised in eight key takeaways.

1. Do the basics and do them well

 

“You can’t forget the basics,” stressed veteran CIA agent Bob Flores during one of the event’s panel sessions. “You have to test systems and applications and the connections between the applications, and you have to understand what your environment looks like,” he added.

Flores, who, towards the end of his CIA career, spent three years as the agency’s first enterprise chief technology officer, asked Black Hat’s AI conference delegates: “How many of you out there have machines that are attached to the internet that you don’t know about? Everyone’s got one, right?”

He also warned that, with AI, understanding what’s in your network needs to happen fast “because the bad guys are getting faster. They can overcome everything you put in place.”

And while enterprises might think it’s safer to develop their own LLMs rather than to rely on internet-accessible chatbots such as ChatGPT, Flores is concerned that they might not be building in security from the beginning. “It’s still an afterthought. As you build these LLMs, you must think, every step of the way, like a bad guy and wonder if you can get into this thing and exploit it.”

2. Architect it out

 

Bartley Richardson, cybersecurity AI lead at GPU giant NVIDIA, advised the Black Hat crowd to look at AI safety from an engineering perspective.

“When you put together an LLM application, don’t just look at every block you’ve architected there; look at the connections between those blocks and ask: ‘Am I doing the best possible security at each of those stages?’ ‘Is my model encrypted at rest?’ Are you putting safeguards in place for your prompt injections?’ This is all Security by Design. When you architect it out, these things become apparent, and you have these feedback loops where you need to put in security,” he explained.

3. Create a safe space to experiment

 

Matt Martin, founder of US cyber consulting firm Two Candlesticks and an AI Security Council member for Black Hat, advised that creating a controlled sandbox environment within which employees can experiment was important. “A lot of people want to use AI, but they don’t know what they want to do with it just yet – so giving them a safe space to do that can mitigate risk,” he said.

Martin added that it was important to understand the business context and how it was going to be applied. “Ensure someone in the company is in overall control of the projects. Otherwise, you’ll end up with 15 different AI projects that you can’t actually control and don’t have the budget for.”

4. Red team your products  

 

Brandon Dixon, AI partner strategist at Microsoft, explained how the software giant is balancing advances in AI with security. “We’ve done that through the formation of a deployment safety board that looks at every GenAI feature that we’ve deployed and attaching a red teaming process to it before it reaches our customers,” he says.

Red teaming is an attack technique used in cybersecurity to test how an organisation would respond to a genuine cyber-attack.

Check out our healthcare cybersecurity tabletop coverage here

“We’ve also formed very comprehensive guidance around responsible AI both internally and externally, consulting experts, which has enabled us to balance moving very quickly from the product side in a way that doesn’t surprise customers,” he added.

5. Partnerships are paramount

 

According to CISA’s Lisa Einstein, ‘Secure by Design’ relies on public and private enterprise partnerships. She added that this is particularly important in terms of sectors that provide critical infrastructure.

To this end, in 2021, CISA established the Joint Cyber Defense Collaborative (JCDC). This public-private partnership aims to reduce cyber risk to the nation by combining the capabilities of the federal government with private sector innovation and insight.

Einstein told conference delegates: “CISA only succeeds through partnerships because more than 80% of critical infrastructure is in the private sector in the US.

“We have a collective and shared responsibility. I’m seeing organisations that didn’t think they were part of this ecosystem, not realising that they have part of the responsibility. Tech providers also need to help these enterprises become more secure and keep everything safe,” she said.

Partnerships with and between vendors were also emphasised at the event. Jim Kavanaugh, longtime CEO and technology guru of $20 billion IT powerhouse World Wide Technology, spoke on the benefits of the firm’s long-term partnership with chipmaker Nvidia, including advances with AI.

In March this year, WWT committed $500 million over the next three years to spur AI development and customer adoption. The investment includes a new AI-proving ground lab environment and a collaboration ecosystem that uses tools from partners, including Nvidia.

While former CIA agent Flores recognised that such partnerships were crucial,  he also stressed the need for firms to conduct robust assessments before onboarding.

“Every one of your vendors is a partner for success, but there are also vulnerabilities. They must be able to secure their systems, and you must be able to secure yours. And together, you must secure whatever links them,” he noted.

6. Appoint an AI officer

 

The conference noted the rise of the chief AI officer, who oversees the safe implementation of AI in organisations. This appointment is now mandatory for some US government agencies following the Biden Administration’s Executive Order on the Safe, Secure and Trustworthy Development and Use of AI.

These execs are required to evaluate different ways to deploy robust processes for evaluating use cases and AI governance.

While it was not a requirement for CISA to appoint a chief AI officer, Lisa Einstein stepped up to the role last month as the organisation recognised that it was important to its mission beyond having an internal AI use case lead.

CISA wanted someone responsible for coordinating those efforts to ensure we were all going in the same direction with a technically sound perspective and to make sure that the work we’re doing internally and the advice we are giving externally is aligned so that we can adapt and be nimble, “she explained.

While this doesn’t have to be a board-level appointment, Einstein added that the person needs to be in the room with an ever-expanding roster of C-Suit players: the CIO, the CSO, the legal and privacy teams, and the data officers when decisions and policies on AI are made.

Einstein added that, within ten years, the position should be redundant if she’s done her job well. “By then, what we do should be so ingrained in us that we won’t need the role anymore. It would be like employing a chief electricity officer. Everyone understands the role they must play and their shared responsibility for securing AI systems and using them responsibly.”

7. Weave AI into your business operations

 

For ISA chief Larry Clinton, Secure by Design starts with theory. For over a decade, his organisation has collaborated with the US National Association of Corporate Directors (NACD), the US Departments of Homeland Security, and the Board of Direct Justice on an annual handbook for corporate boards to analyse cyber risk.

According to Clinton, ISA is currently developing a version of this handbook specifically for working with AI, which will be released this fall.

Clinton claimed that enterprises need to bring three core issues to the board level.

“AI deployment needs to be done strategically. Organisations underestimate risks associated with AI and overestimate the ability of staff to manage those risks. This comes from an idiosyncratic adaptation of AI, which needs to be woven into the full process of business operations, not just added on independently to various projects,” he says.

The second issue, he said, was education and the need to explain AI impacts to board members rather than explaining the nuts and bolts of how various AI deployments work.

The third issue, he added, was communication. “It’s critical that we move AI out of the IT bubble and make it part of the entire organisation. This is exactly the same advice we give with respect to cybersecurity. AI is an enterprise-wide function, not an IT function.”

8. Limiting functionality mitigates risk

 

According to Microsoft’s Brandon Dixon, limiting the actions that an AI system is capable of is well within a human’s control and should, at times, be acted upon. The computer giant has done this with many of its first-generation copilot tools, he added.

“What we’ve implemented today is a lot of ‘read-only’ operations. There aren’t a lot of AI systems that are automatically acting on behalf of the user to isolate systems. And I think that’s an important distinction to make — because risk comes in when AI automatically does things that a human might do when it may not be fully informed. If it’s just reading and providing summaries and explaining results, these can be very useful and low-risk functions.”

According to Dixon, the next stage will be to examine “how we go from assertive agency to partial autonomy to high autonomy to full autonomy. At each one of those levels, we need to ask what safety systems and security considerations we need to have to ensure that we don’t introduce unnecessary risk.”

The post Black Hat USA 2024: Eight ways to achieve ‘Secure by Design’ AI appeared first on TechInformed.

]]>
25635
Unilever and Accenture expand GenAI partnership https://techinformed.com/unilever-and-accenture-expand-genai-partnership/ Thu, 05 Sep 2024 13:35:06 +0000 https://techinformed.com/?p=25598 Unilever and Accenture have extended their partnership with the aim of simplifying Unilever’s digital core and enhancing its use of generative AI. The multi-year program… Continue reading Unilever and Accenture expand GenAI partnership

The post Unilever and Accenture expand GenAI partnership appeared first on TechInformed.

]]>
Unilever and Accenture have extended their partnership with the aim of simplifying Unilever’s digital core and enhancing its use of generative AI.

The multi-year program aims to scale generative AI use cases, providing cost reductions and operational efficiencies.

Hein Schumacher, CEO of Unilever said: “We have already introduced 500 AI applications across Unilever, helping us to reach new levels of efficiency. But as AI matures and becomes increasingly intelligent and intuitive, we see so much more potential.”

“With the help of Accenture’s world-class tools and capabilities, we will be able to analyse where and how AI can have the highest transformational impact and deliver the greatest returns.”

Unilever will make use of Accenture’s ‘GenWizard’ platform.

Julie Sweet, chair and CEO, Accenture, said: “This next exciting chapter in our decades-long collaboration with Unilever will raise the bar on how enterprises can scale GenAI to power productivity and value at speed.”

“Accenture’s GenWizard platform will enable Unilever to create a full spectrum of targeted GenAI solutions across its business that can realise efficiencies and cost savings, uncover new ways of working and ultimately help drive competitive advantage.”

Last month, TI spoke to Unilever’s VP for consumer experience technology about how the big name is using artificial intelligence in its beauty experiences.

The firm has launched two consumer-facing apps that use AI to recommend beauty products based on a customer’s selfie.

TI also spoke to the firm’s R&D head of digital about how it is using AI to make its products more sustainable. Read the case study here.

The post Unilever and Accenture expand GenAI partnership appeared first on TechInformed.

]]>
25598
A coffee with… Lauren Pedersen, CEO, SportAI https://techinformed.com/ai-in-sport-a-coffee-with-lauren-pedersen-ceo-sportai/ Wed, 04 Sep 2024 16:59:00 +0000 https://techinformed.com/?p=25578 With a background in competitive tennis, Lauren Pedersen knows what it’s like to receive feedback from various coaches and wonder what the best advice would… Continue reading A coffee with… Lauren Pedersen, CEO, SportAI

The post A coffee with… Lauren Pedersen, CEO, SportAI appeared first on TechInformed.

]]>
With a background in competitive tennis, Lauren Pedersen knows what it’s like to receive feedback from various coaches and wonder what the best advice would be. Last year, she co-founded a fledgling startup, SportAI, that integrates AI in sport.

The B2B sports technology firm aims to enhance sports technique coaching, commentary, and analysis with artificial intelligence and benchmarked by gold-standard athletes.

As its CEO and co-founder, Pedersen talks with TI about how the company integrates AI in sports to provide high-quality technique analysis.

The former CMO for air quality tech firm AirThings, and fintech firms InstaBank and Omny, also discusses the inspiration behind the venture, the process of developing the technology, and its potential impact on the sports industry.

The conversation with the Oslo-based founder also covers the firm’s recent seed funding round, amounting to $1.8 million, with investors including Magnus Carlsen, the highest-ranked chess player of all time, and ex-pro tennis player Dekel Valtzer, as well as Skyfall Ventures.

 

What inspired you to cofound Sport AI?

I’ve been playing sports my entire life and have a particular passion for tennis. I played juniors in New Zealand and NCAA college tennis in the States and continue to play today. My career has been in tech, so now, founding my own company that combines my love of sports with my tech experience is the perfect opportunity for me.

How does your sports background influence the technology?

I understand sport deeply. I know what it means to learn, train, and strive for improvement. Our technology aims to open access to high-quality, objective sports data for everyone. Growing up in New Zealand, I and many others didn’t have access to top-tier technique analysis, which has been reserved for pro players with teams of experts. With advances in AI, particularly computer vision and machine learning, we can analyse techniques, compare them to gold-standard players, and provide immediate feedback for improvement.

Can you describe the process of developing and refining the technology?

As our name suggests, we rely on AI, specifically video analysis. The video can come from various sources like mobile phones, broadcast feeds, or cameras mounted at sports venues. We analyse the footage to understand the technique and compare it to a gold standard. This standard could combine the best players’ techniques or a specific player you want to emulate, like Roger Federer. We then provide immediate feedback on how to improve.

You were just at the US Open. Do you find yourself thinking about how AI would correct techniques as you watch sports?

Definitely, the opportunity to analyse techniques and improve training, data accuracy, and fan engagement across various sports is huge. It can even enhance product recommendations for sports equipment. For example, we can use our technology to help players choose the right tennis racket for their style rather than just imitating a pro’s choices.

What feedback have you received from your early users, and how has it shaped the product?

Technique analysis has traditionally been subjective, expensive, and unscalable. Our technology changes that by empowering coaches with data and giving players trackable improvement metrics. This benefits the industry, and our users will see its potential.

How has collaborating with other professionals in competitive sports such as chess contributed to SportAI’s success?

Collaboration has been crucial. One of our early investors is Magnus Carlsen, the world-renowned chess player and an absolute superstar who has heavily relied on AI to improve his game. He believes AI can significantly enhance training across all sports. Our team combines expertise in sports and technology, creating a strong, diverse foundation.

What are your priorities after the recent seed funding from Skyfall Ventures?

We’re focusing on product development and onboarding customers in three key areas: training and coaching, product recommendations for brands and retailers, and broadcasting. We’re partnering with forward-thinking brands to roll out our technology over the coming months.

Now, how do you take your coffee?

I prefer a large, milky coffee — ideally a large latte. However, I live in Norway, where everyone drinks black coffee without often having the option to add milk. So, adding milk these days is a bit of a luxury.

How do you wind down from your busy schedule?

Tennis and training are still big parts of my life. They keep me in shape and make me a better leader, team member, and family member because I stay in shape and still get to experience the sport. It’s good for mind, body, and soul.

 

Read: Transfer deadline: Using AI in sport to recruit football talent

The post A coffee with… Lauren Pedersen, CEO, SportAI appeared first on TechInformed.

]]>
25578
Clearview AI fined $33m for facial recognition tech https://techinformed.com/clearview-ai-fined-over-33m-for-illegal-facial-recognition-database/ Tue, 03 Sep 2024 15:26:20 +0000 https://techinformed.com/?p=25552 US facial recognition firm Clearview AI has been fined €30.5 million by the Dutch data protection watchdog (DPA) for hosting an “illegal database”. Clearview AI… Continue reading Clearview AI fined $33m for facial recognition tech

The post Clearview AI fined $33m for facial recognition tech appeared first on TechInformed.

]]>
US facial recognition firm Clearview AI has been fined €30.5 million by the Dutch data protection watchdog (DPA) for hosting an “illegal database”.

Clearview AI uses data scraping technology to harvest people’s public photographs from websites and social media platforms to create an online database of 20 billion images of faces and data.

According to the watchdog, Clearview has not objected to the DPA’s decision and would therefore be unable to appeal against the fine.

“Facial recognition is a highly intrusive technology, that you cannot simply unleash on anyone in the world,” DPA chairman Aleid Wolfsen said in a statement.

“If there is a photo of you on the Internet – then you can end up in the database of Clearview and be tracked. This is not a doom scenario from a scary film. Nor is it something that could only be done in China,” he added.

It also banned Dutch companies from using Clearview’s services.

The DPA ordered an imposing penalty of up to €5 million if Clearview doesn’t halt the breaches of the regulation.

In a statement to The Associated Press, Clearview’s chief legal officer, Jack Mulcaire said that the decision is “unlawful, devoid of due process and is unenforceable.”

“Clearview AI does not have a place of business in the Netherlands or the EU, it does not have any customers in the Netherlands or the EU and does not undertake any activities that would otherwise mean it is subject to the GDPR,” Mulcaire added.

Two years ago, the UK watchdog (ICO) also fined Clearview AI £7.5m for the same reason.

At the time, the ICO said that even though the firm does not offer its services to UK organisations, Clearview still had customers in other regions and found it “likely” that it could still use the personal data of UK residents given the nation’s high number of social media users.

The post Clearview AI fined $33m for facial recognition tech appeared first on TechInformed.

]]>
25552
Klarna plans to halve workforce with AI, Meta developing headset to challenge Apple Vision Pro https://techinformed.com/klarna-cuts-workforce-by-half-with-ai-meta-develops-puffin-headset-to-challenge-apple-vision-pro/ Fri, 30 Aug 2024 10:34:49 +0000 https://techinformed.com/?p=25478 Klarna is leveraging AI to halve its workforce   Payment platform Klarna has unveiled plans to cut its staff footprint by over half as the… Continue reading Klarna plans to halve workforce with AI, Meta developing headset to challenge Apple Vision Pro

The post Klarna plans to halve workforce with AI, Meta developing headset to challenge Apple Vision Pro appeared first on TechInformed.

]]>
Klarna is leveraging AI to halve its workforce

 

Payment platform Klarna has unveiled plans to cut its staff footprint by over half as the Swedish firm bets big on AI.

The Stockholm-based ‘buy now, pay later’ firm said it will use an attrition downsizing policy, which means departing staff will no longer be replaced with new hires.

Instead, Klarna plans to leverage artificial intelligence and automation to replace departing roles in various departments. It has already reduced its headcount from 5,000 to 3,800 in the past year.

Chief executive Sebastian Siemiatkowski heralded the benefits of AI as Klarna revealed its second-quarter results earlier this week.

“Not only can we do more with less, but we can do much more with less. Internally, we speak directionally about 2,000 [employees]. We don’t want to put a specific deadline on that,” he told the Financial Times.

Read more…

 

Will Meta’s Puffin be a new competitor to Apple Vision Pro?

 

Meta has started developing a new mixed-reality headset that is intended to compete directly with the Apple Vision Pro.

According to The Information, the Meta headset, codenamed Puffin, will resemble a pair of glasses more than a traditional VR headset.

The Facebook parent — which already sells headsets following its 2014 acquisition of Oculus — is developing a mixed-reality device that will weigh less than 110g, significantly lighter than the Meta Quest 3’s 515g.

It will achieve this by using a tethered “puck” containing the Puffin’s batter and processor, leaving just the display in the headset, the report claims, although it is unlikely to launch until 2027 at the earliest.

Read more…

Why Uber was fined $324 million for GDPR violations

 

According to the Dutch data protection regulator, Uber has been fined $324 million for violating EU data protection rules.

The Dutch DPA accused the ride-hailing firm of transferring the personal data of its European drivers to US servers, calling it a “serious violation” of the EU’s General Data Protection Rule (GDPR).

Uber said it would appeal the fine, which it claimed was “completely unjustified” as the transfer was “compliant with GDPR during a period of immense uncertainty between the US and EU.”

The watchdog claims Uber transferred information, including ID documents, taxi licences and location data, to its US headquarters over a two-year period but failed to safeguard it.

It launched the investigation after more than 170 French drivers complained to a French human rights group, which issued a complaint to France’s data watchdog.

Read more…

Musk’s Grok chatbot tweaked to address election misinformation concerns

 

The social media platform X has changed its AI chatbot after five secretaries of state in the United States warned it was spreading election misinformation.

Top election officials from Michigan, Minnesota, New Mexico, Pennsylvania and Washington sent a letter this month to Elon Musk complaining that the platform’s AI chatbot, Grok, produced false information about state ballot deadlines shortly after President Joe Biden dropped out of the 2024 presidential race.

The secretaries of state requested that the chatbot instead direct users who ask election-related questions to canivote.org, a voting information website run by the National Association of Secretaries of State.

Before listing responses to election-related questions, the chatbot now says, “For accurate and up-to-date information about the 2024 U.S. Elections, please visit vote.gov.”

The five state secretaries said in a shared statement that both websites are “trustworthy resources that can connect voters with their local election officials.”

Read more…

The post Klarna plans to halve workforce with AI, Meta developing headset to challenge Apple Vision Pro appeared first on TechInformed.

]]>
25478
UK gov launches £2.1m fund to fill skills gap in space sector https://techinformed.com/uk-gov-launches-2-1m-fund-to-fill-skills-gap-in-space-sector/ Wed, 28 Aug 2024 10:00:47 +0000 https://techinformed.com/?p=25348 The UK Space Agency has announced a £2.1 million investment to fund five projects to fill the space industry’s skills gap. The investment will go… Continue reading UK gov launches £2.1m fund to fill skills gap in space sector

The post UK gov launches £2.1m fund to fill skills gap in space sector appeared first on TechInformed.

]]>
The UK Space Agency has announced a £2.1 million investment to fund five projects to fill the space industry’s skills gap.

The investment will go towards training programmes, courses and other learning interventions to boost AI, software, and data skills.

The projects will be led by the universities of Edinburgh, Leicester and Portsmouth, the Royal Institute of Navigation, and Plastron Training – a provider of training services focused on safety in the commercial space sector.

The University of Portsmouth’s course, “Securing the Future of Space: Space Software and Data/AI”, will aim to equip mid-career professionals with tools to navigate the increasing role of AI and data science in space.

“Software, data, and AI development proceeds at such a rate that remaining at the forefront of the sector is challenging, yet these digital skills are critical to drive innovation and meet the objectives of the National Space Strategy,” said Becky Canning, deputy director (Space) at the University of Portsmouth’s Institute of Cosmology and Gravitation.

The course is aimed at existing space sector employees looking for a promotion, and to fill employer gaps, as well as professionals in adjacent industries such as military, engineering, defence, and maritime who want to enter the space industry.

The University of Leicester’s course will concentrate on sustainable space engineering, law and operations, and the University of Edinburgh’s course will focus on software and data, as well as transferrable skills (find out more, here.)

In the same announcement, the UK and European space agencies unveiled that they are strengthening work on the European Centre for Space Applications and Telecommunications, with a focus on the centre’s 5G/6G hub. Its attention will be on satellite telecommunications and the applications of satellite services.

The space agencies will also begin exploring the potential for a space quantum technologies laboratory, as well as developing in-orbit servicing, assembly and manufacturing of satellites – with the aim to keep satellites in orbit and prolonging their lifetime and minimising waste in space.

Read: Sustainability in space

The post UK gov launches £2.1m fund to fill skills gap in space sector appeared first on TechInformed.

]]>
25348
McAfee and Lenovo unveil AI-powered deepfake detector https://techinformed.com/mcafee-and-lenovo-unveil-ai-powered-deepfake-detector/ Wed, 21 Aug 2024 11:47:50 +0000 https://techinformed.com/?p=25194 Antivirus firm McAfee has launched an AI-powered deepfake detector exclusively on Lenovo AI PCs. The deepfake detector claims to automatically alerts users if it identifies… Continue reading McAfee and Lenovo unveil AI-powered deepfake detector

The post McAfee and Lenovo unveil AI-powered deepfake detector appeared first on TechInformed.

]]>
Antivirus firm McAfee has launched an AI-powered deepfake detector exclusively on Lenovo AI PCs.

The deepfake detector claims to automatically alerts users if it identifies AI-generated audio in a video, helping consumers discern real from fake with what it claims to be a 96% accuracy rate.

McAfee added that the technology leverages the Neural Processing Unit (NPU) in Lenovo AI PCs to perform the entire identification process within the device.

According to the firm the on-device processing will save users from risking the exposure of video data by manually uploading videos to deepfake-detecting websites, or cloud-based alternatives.

The cyber sec vendor added  that the detector doesn’t collect or record a user’s audio – and people can only opt into the tool and can turn it back off if they want to.

According to McAfee, less than a fifth of social media users in the UK find it easy to spot AI-generated content. Almost half (45%) of victims of voice cloning or other deepfake scams in the UK lose money, and almost a quarter have lost more than £1,000.

By digitally imitating the voice or face of a firm’s C-level executive or a senior manager, an attacker can make payment requests, order wire transfers, request changes to bank information, or ask for invoices and billing addresses to be updated.

In May this year, the head of WPP Mark Read revealed how the ad agency had been the target of a deepfake scam, with fraudsters using a voice clone of the CEO alongside YouTube footage of the exec, while impersonating him in the chat window of a WhatsApp meeting.

“Knowledge is power, and this has never been more true than in the AI-driven world we’re living in today,” said SVP of product at McAfee, Roma Majumder.

“No more wondering, ‘Is this Prince William investment scheme legitimate?’ , ‘Does Taylor Swift really want to giveaway cookware to fans?’ or ‘Did Sir Keir Starmer actually say these words?’ The answers are provided to you automatically and within seconds with McAfee Deepfake Detector.”

Read more on TechInformed’s coverage of deepfakes in the Year of Elections here

The post McAfee and Lenovo unveil AI-powered deepfake detector appeared first on TechInformed.

]]>
25194
Netflix, Salesforce, and Disney among majority of Fortune 500 companies who consider AI “a risk” https://techinformed.com/netflix-salesforce-and-disney-among-majority-of-fortune-500-companies-who-consider-ai-a-risk/ Tue, 20 Aug 2024 16:34:02 +0000 https://techinformed.com/?p=25176 The majority of Fortune 500 companies consider AI a risk, according to a report by observability platform Arize. In its review of Fortune 500 annual… Continue reading Netflix, Salesforce, and Disney among majority of Fortune 500 companies who consider AI “a risk”

The post Netflix, Salesforce, and Disney among majority of Fortune 500 companies who consider AI “a risk” appeared first on TechInformed.

]]>
The majority of Fortune 500 companies consider AI a risk, according to a report by observability platform Arize.

In its review of Fortune 500 annual reports (as of May 1st this year) the firm found a 250% increase in mentions of AI since the release of 2022 statements, with an almost 500% (473.5) rise in companies citing AI as a risk factor.

Over half of the firms (64%) have mentioned AI in their annual financial report and over one in five specifically reference generative AI.

Over two-thirds of companies mentioning generative AI do so in the context of risk, whether it is through the use of it, or external competition, or security threats to the business.

The industry most concerned is advertising, media, and entertainment, where over 90% cited the burgeoning tech as a risk factor.

Streaming platform Netflix, for instance, said it is concerned about failing to keep pace with competitors or achieve AI goals.

“If our competitors gain an advantage by using such technologies, our ability to compete effectively and our results of operations could be adversely impacted,” its report read.

Disney’s annual report expressed concerns over the regulation of AI and its potential to upend data pipelines or business lines relying on machine learning models.

“Rules governing new technological developments, such as developments in generative artificial intelligence, remain unsettled,” it read. “[These] may affect aspects of our existing business model, including revenue streams for the use of our IP and how we create our entertainment products.”

Telco giant, Motorola, joins firms such as CRM platform Salesforce in expressing concern over potential reputational damage.

“As we increasingly build AI, including generative AI, into our offerings, we may enable or offer solutions that draw controversy due to their actual or perceived impact on social and ethical issues resulting from the use of new and evolving AI in such offering,” Motorola’s report said.

“Although we work to responsibly meet our customers’ needs for products and services that use AI, including through AI governance programs and internal technology oversight committees, we may still suffer reputational or competitive damage as a result of any inconsistencies in the application of the technology or ethical concerns, both of which may generate negative publicity..”

Other firms, such as pharmaceutical firm Vertex, are most concerned about data leakage and heightened cyber security risks when it comes to the emerging technology.

“Risks relating to inappropriate disclosure of sensitive information or inaccurate information appearing in the public domain may also apply from our employees engaging with and use of new artificial intelligence tools, such as ChatGPT,” it said.

Read: Would you take a drug created with AI?

Arize’s review concludes that while most Fortune 500 firms are mentioning risk as a factor when it comes to AI, “there is a real opportunity for enterprises to stand out by highlighting their innovation and providing context on how they are using generative AI.”

The post Netflix, Salesforce, and Disney among majority of Fortune 500 companies who consider AI “a risk” appeared first on TechInformed.

]]>
25176