July 16 is a day to celebrate the benefits of AI, while also being aware of the risks of its poor implementation. Here’s what the experts have to say.
Kumar Mitra
Executive Director, CAP & ANZ, ISG, at Lenovo
AI Appreciation Day is a timely reminder that AI is no longer a distant frontier – it’s a business-critical priority. Yet, many organisations remain constrained not by imagination, but by infrastructure. As AI grows more sophisticated, enterprises must rethink how data is processed, decisions are made, and outcomes are delivered.
In Australia, Lenovo’s CIO Playbook 2025 with IDC shows that over 63 per cent of CIOs see aligning AI to business strategy as a top priority, yet 58 per cent cite data infrastructure as their biggest barrier. It’s clear that scaling AI isn’t just about capability – it’s about readiness.
That’s why Lenovo believes the future lies in Hybrid AI – a distributed, secure, and agile approach that enables AI to run where it makes the most impact, whether in the cloud, on-premises, or at the edge.
AI done right can drive both economic and human progress. Whether it’s transforming supply chains, accelerating healthcare diagnostics, or enabling more inclusive solutions for people with disabilities, the opportunity is profound.
This AI Appreciation Day, organisations must think beyond pilots and embrace AI as a strategic driver of sustainable growth, inclusion, and real-world impact.
Jeremy Pell
Area Vice President for ANZ at Elastic
AI has reached a critical inflection point, evolving from basic automation to intelligent systems that can connect enterprise applications and workflows. For these systems to work seamlessly, they need fast and accurate access to all of their structured and unstructured data.
This evolution extends into everyday life. For example, Uber uses Elastic’s Search AI platform to enhance user experiences across its Uber Eats and ride-matching apps, providing fast, personalised, and context-aware search results, and efficiently matching riders with drivers globally.
AI also plays a pivotal role in security. Cybersecurity teams must leverage generative AI to strengthen defences against threats like deepfakes and phishing. AI-enhanced search helps companies reduce false positives and significantly improves the speed of incident detection and resolution.
On AI Appreciation Day, it’s timely to reflect not just on the progress we’ve made but on the responsibility ahead. The Business Council of Australia’s AI Agenda highlights the need for decisive action to position Australia as a global leader by 2028, focusing on digital infrastructure, skills development, and safe adoption across sectors.
AI has the potential to reshape industries, boost productivity, and improve our quality of life, but only if it’s underpinned by platforms that are open, adaptive, and capable of providing a real-time, complete view of all data. At Elastic, we’re proud to be building technologies that empower organisations to harness this new era of AI with confidence and agility.
David Allott
Field CISO APJ, Veeam
On AI Appreciation Day, most conversations focus on the risks AI introduces. But in cybersecurity, we are overlooking a bigger opportunity: AI is also helping us rethink how we defend and recover.
AI is enabling security leaders across APJ to move from reactive defence to proactive resilience. At Veeam, we are integrating AI to help businesses better understand and protect their data. It is not just about knowing where it is stored, but understanding what it contains, who has access, and whether it is subject to compliance issues or exfiltration risk.
As companies adopt hybrid-cloud models and operate across legacy and modern infrastructure, this visibility becomes harder to achieve. AI cuts through the complexity by identifying anomalies faster, surfacing misconfigurations before they become vulnerabilities, and aligning backup strategies with actual data value.
While AI is not a silver bullet for security, it is a tool that delivers clarity and context on how to be more resilient. For example, Veeam embeds AI directly into its platforms to assist with threat diagnostics and intelligent support for backup admins. Tools like Veeam ONE deliver AI-powered threat detection, while Recon Scanner (a feature of Veeam Data Platform) helps identify risks before they escalate. This ensures organisations can protect the right data in the right way before threats emerge.
Jimmy Mesta
Co-Founder and CTO of RAD Security
Security teams use stacks that generate thousands of signals a minute across dozens of tools. It’s no longer possible to define every relationship between those signals with rules alone. AI is now the only realistic way teams can keep up. Instead of brittle rules that keep breaking, AI can spot patterns, connect events across multiple parts of the security stack, and take action fast enough to matter. For a lean security team to keep functioning at scale with a mature stack, applying AI to at least some of these tasks is no longer optional.
Josh Mason
CTO of RecordPoint
A massive AI transformation is underway across all levels of the enterprise, from engineers vibe-coding whole applications in days, not weeks, to executives streamlining communication and strategy.
The companies that will experience success with this transformative technology will be those who rethink their entire business model and governance approach. Signing up for a company Copilot or ChatGPT licence isn’t enough – and it doesn’t manage your risk. You have to make sure you’re governing your data and using the technology responsibly and ethically, in a way that benefits your customers and employees.
That’s why many businesses have struggled to implement GenAI tools, finding themselves stuck at the pilot phase. According to one study, only 6 per cent of businesses reported moving to a large-scale deployment of Copilot. The number one reason is poor governance of unstructured data.
The key is for businesses to get ahead of the curve on ethical AI governance. By proactively aligning their use of AI with principles of customer focus and employee wellbeing, they can unlock the benefits of the technology while mitigating the risks. This kind of responsible, forward-looking approach will be critical for success in the years to come.
At the core of the solution is data. Companies need to focus on locating and understanding the sensitive customer data they have, and their obligations when it comes to data retention and minimisation, access and security. They can accelerate AI adoption with a risk-based, data-centric approach to identifying fit-for-purpose data for model training, ensuring that the only data that goes into an AI model is that which does not contain confidential or sensitive information.
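The screening step described above can be sketched in a few lines of Python. The regex patterns, sample records, and function names below are illustrative assumptions, not a production-grade PII detector:

```python
# Toy pre-training screen: exclude candidate records that contain
# patterns resembling sensitive data (emails, card-like numbers).
# Real pipelines would use a dedicated data-classification service.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email addresses
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),    # card-like digit runs
]

def is_safe_for_training(record: str) -> bool:
    """Return True only if no sensitive pattern matches the record."""
    return not any(p.search(record) for p in SENSITIVE_PATTERNS)

records = [
    "Customer praised the new checkout flow.",
    "Contact jane.doe@example.com about the refund.",
    "Card 4111 1111 1111 1111 was declined.",
]
training_set = [r for r in records if is_safe_for_training(r)]
print(training_set)  # only the first record survives the screen
```

In this sketch, anything that fails the screen is simply excluded; a real system might instead redact the matched spans so the remainder of the record stays usable.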
With the rise of agentic AI, these models are becoming more embedded into our working and personal lives. Businesses that prioritise governance – with a focus on data at the core – will be those best positioned to benefit from this transformation.
Andrew Kay
Director Systems Engineering, APJ, at Illumio
AI Appreciation Day is the perfect time to reflect on the impact of AI on the businesses we all rely on. When it comes to cyber security – there are big gains, but drawbacks too.
On one hand, AI is increasingly being used by threat actors to ramp up their attacks. Today’s threats are not only becoming more sophisticated – they’re also becoming more accessible, allowing even novice cybercriminals to carry out highly effective attacks. AI threats aren’t just a talking point for cyber experts – they’re real and impact everyday workers. A finance worker at a multinational Hong Kong bank, for example, transferred US$25 million after attackers used a Zoom deepfake scam to pose as the company’s chief financial officer.
On the flip side, AI is enhancing cyber security technologies in new and exciting ways, too. AI cloud detection and response (CDR), for example, is a new tool that identifies lateral movement risks, detects attacks, and contains breaches instantly – all at cloud scale. AI is powering security graphs – so organisations can visualise risk and get an unparalleled view of their hybrid cloud attack surface. This kind of visualisation and observability is incredibly important for organisations to identify unusual patterns and behaviours of attackers that otherwise largely go undetected.
With AI-powered attackers constantly scanning for gaps and adapting their techniques to evade defences they discover, prevention isn’t enough. Organisations need to meet fire with fire. They need containment, with AI-driven context and visibility, to block lateral movement and limit the blast radius when something breaks through.
Justin Hurst
Chief Technology Officer APAC at Extreme Networks
Across Australia, organisations are evaluating how much and how quickly they should lean into AI. Ignoring the trend is not an option, but jumping in without a plan is not viable either. Over the next year, businesses will need to set realistic, outcome-based goals for how AI-powered solutions and platforms will be deployed.
At the same time, IT leaders need to think beyond isolated tools and take a more holistic view, taking a strategic approach that combines talent development, infrastructure modernisation, and cultural transformation. For example, training in areas such as data literacy, AI systems, and network automation should be treated as a strategic priority, not an afterthought.
It is also worth remembering that AI is not just about squeezing out more efficiency. When implemented thoughtfully, it can create a feedback loop for continuous improvement. But this only works in environments where teams have the freedom to experiment, iterate, and occasionally fail without penalty.
The future of network engineering is not about replacing people with AI, but about enabling them to work smarter and more strategically. Enterprises that embrace this shift will be well-positioned to achieve greater agility, sharper competitive advantages, and faster innovation in an AI-driven world.
Patrick Harding
Chief Product Architect at Ping Identity
AI Appreciation Day is a timely reminder of the incredible promise and growing complexity that AI brings to our digital world. From deepfakes to autonomous agents, AI has transformed the landscape of identity-based cyber threats, making it increasingly difficult to verify who, or what, is behind a digital interaction. Without the right safeguards, these technologies risk eroding the trust that underpins everything from financial services to healthcare. Yet AI is also a powerful tool for defence. When deployed responsibly, it can enhance real-time risk detection, behavioural analysis, and adaptive authentication, helping organisations prevent fraud while improving the user experience.
As AI continues to evolve and agents become more autonomous, now is the time for organisations to rethink identity models, ensure secure delegation, and prepare systems to recognize and authenticate not just people, but the intelligent processes acting on their behalf. Building and maintaining trust in every digital interaction is more essential than ever, and organisations must ensure their identity strategies evolve in lockstep with the technology driving today’s transformation.
Shaun Leisegang
General Manager ‑ Automation, Data and AI, at Tecala
Each year, AI Appreciation Day invites us to pause and reflect, not just on how far artificial intelligence has come, but on what it’s actually for. It’s a timely reminder that the real power of AI lies in what it enables people to do. AI is not a replacement for human potential, but rather a partner in unlocking it. It’s not about machines taking over, it’s about people stepping up.
As organisations increasingly adopt AI across their operations, the conversation is shifting from efficiency to empowerment. The best applications of AI aren’t those that eliminate roles, but those that eliminate friction – freeing people to do more meaningful, creative, and strategic work.
The future of work isn’t just about automation – it’s about meaningful augmentation. Rather than replacing people, AI agents are creating the conditions for people to focus on what humans do best: creativity, strategy, and complex problem-solving.
Tecala’s AI agents embody this approach. These prebuilt agents are designed to integrate seamlessly into everyday workflows – handling tasks like expense claims, leave approvals, and email triage. Delivered through a flexible Automation-as-a-Service model, they help businesses move beyond pilots and proofs-of-concept, embedding automation where it matters most.
On AI Appreciation Day, it’s worth remembering that the value of AI isn’t in the code – it’s in what it unlocks for people. The real opportunity lies in redesigning work so that human talent is no longer wasted on low-value tasks.
Les Williamson
Regional Director, ANZ, at Check Point Software Technologies
The boom in AI investments, including generative AI, within heavily regulated industries such as financial institutions has unlocked immense opportunities for innovation, but it has also introduced new risk surfaces, particularly concerning data security, risk scoring, auditing, and regulatory compliance. Cybercriminals are quickly adapting, exploiting vulnerabilities in AI-driven processes. These include attacks such as data poisoning, manipulation of machine learning models, or the use of AI to conduct highly sophisticated cyberattacks. Furthermore, integrating AI tools into hybrid architectures can lead to inconsistencies in security protocols if not carefully governed.
A well-governed AI can revolutionise cyber security, streamline auditing processes, and ensure regulatory compliance across industries. This is all important in the light of a recent Check Point AI Security Report which found that AI services are used in at least 51 per cent of enterprise networks every month. In addition, the report shows that 1 in every 80 prompts (1.25 per cent) sent to GenAI services from enterprise devices was found to have a high risk of sensitive data leakage, and an additional 7.5 per cent of prompts (1 in 13) contained potentially sensitive information. Here at Check Point, we’ve been using AI in our solutions for the last decade, long before it became popular, with our ThreatCloud AI acting as the central nervous system for our security solutions, keeping organisations both secure and productive without unnecessary disruptions.
Effectively communicating complex AI risks to business leaders and boards is essential. IT and cyber security risks, including those associated with AI, are no longer optional considerations – they are strategic imperatives. By analysing risks from both qualitative and quantitative perspectives, business leaders can better understand and weigh security risks against financial benchmarks. This approach helps justify security investments based on clear financial principles.
At the same time, adopting a proactive, preventive approach is critical for building trustworthy AI systems and avoiding costly retrofits or compliance failures down the line. Many AI risk frameworks emphasise embedding security and privacy measures ‘by design’ during the AI development lifecycle. By addressing foreseeable privacy and ethical concerns early in the System Development Life Cycle (SDLC) and implementing robust protective mechanisms, organisations can prevent unauthorised access and misuse of data and models from the outset. Taking these steps allows businesses to manage AI risks effectively while fostering innovation and trust.
Pieter Danhieux
CEO and Co-Founder of Secure Code Warrior
The past three years have seen some remarkable progress in the AI space, with unprecedented implementation of the technology across multiple sectors.
Developments in agentic AI tools represent a new frontier for productivity and automation, but the real magic lies in how skilled humans utilise them to reach new heights in their roles. By freeing up repetitive tasks in well-tested, secure environments, everyone from software engineers to academic researchers can use their precious time more effectively to innovate and create. With that in mind, it is imperative we continue to place value on human expertise, experience and critical thinking, especially when it comes to secure navigation and implementation of these tools.
Hallucinations and AI-borne vulnerabilities remain a chief concern, and it's AI-savvy humans applying their knowledge that will unlock the productivity gains many promise, with the safety and nuance required to truly move the needle.
Gareth Cox
Vice President Sales, APJ, at Exabeam
As AI technologies continue to evolve, security teams must address AI-specific risks in a multifaceted way. The risks associated with overreliance on AI can be mitigated through the following security tactics and methods:
A strategy of knowledge management: To maximise the effectiveness of AI while minimising risks, businesses should focus on knowledge management strategies that customise AI systems to their specific problem domains. This can be achieved by employing Retrieval-Augmented Generation (RAG) to integrate domain-specific knowledge bases or by fine-tuning models to align with organisational needs.
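The RAG tactic described above can be sketched in a few lines. The word-overlap scoring, sample knowledge base, and function names here are illustrative stand-ins; a production system would use vector embeddings and a real retrieval index:

```python
# Minimal sketch of the Retrieval-Augmented Generation (RAG) pattern:
# retrieve the most relevant domain documents for a query, then prepend
# them to the prompt sent to the model so answers stay grounded.

def score(query: str, doc: str) -> int:
    """Count query words that appear in the document (case-insensitive)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, knowledge_base: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    ranked = sorted(knowledge_base, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def build_prompt(query: str, knowledge_base: list[str]) -> str:
    """Ground the model's answer in the retrieved context."""
    context = "\n".join(retrieve(query, knowledge_base))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

kb = [
    "VPN access requires multi-factor authentication for all staff.",
    "Expense claims over $500 need manager approval.",
    "Backups run nightly at 02:00 and are retained for 90 days.",
]
print(build_prompt("How long are backups retained?", kb))
```

The same shape applies whether the knowledge base is a policy wiki or a ticketing system: the model only ever sees the retrieved slice, which is what keeps it aligned to the organisation's own problem domain.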
UEBA detection: User and entity behaviour analytics (UEBA) can identify a legitimate user account exhibiting anomalous behaviour by using behavioural profiling and analysis to provide insights. It can also view multiple systems as a whole and identify anomalous activity as it moves laterally across the network. Overall, it improves the speed of threat detection and response, making cybersecurity more effective and efficient in a rapidly evolving threat landscape.
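The behavioural profiling at the heart of UEBA can be illustrated with a toy baseline. Real UEBA products model many more signals (hosts accessed, data volumes, peer groups); the single login-hour feature and threshold below are assumptions for illustration only:

```python
# Toy behavioural profile: learn a user's typical login hour from
# history, then flag logins that deviate sharply from that baseline.
from statistics import mean, stdev

def is_anomalous(history: list[int], login_hour: int, threshold: float = 3.0) -> bool:
    """Flag a login whose hour is more than `threshold` standard
    deviations from the user's historical mean login hour."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return login_hour != mu
    return abs(login_hour - mu) / sigma > threshold

# A user who normally logs in around 9am:
baseline = [9, 9, 10, 8, 9, 10, 9, 8]
print(is_anomalous(baseline, 9))   # → False: a typical hour
print(is_anomalous(baseline, 3))   # → True: a 3am login stands out
```

The same statistical idea, applied across many features and correlated across systems, is what lets UEBA spot a compromised account even though its credentials are valid.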
24/7 monitoring: To detect and respond to threats as they happen, security teams should prioritise continuous, real-time monitoring. Additionally, employing bias detection and mitigation techniques can ensure fairness and reliability in AI results.
Staff training: Awareness of the limitations and best practices for AI use is crucial. This includes training users to cross-verify AI outputs while remaining sceptical of overly confident responses.
By utilising a cautious and innovative security plan, businesses can maximise the potential of automated technology without jeopardising sensitive information or negatively impacting business operations.
Ezzeldin Hussein
Senior Director, Solutions Engineering, at SentinelOne
On this World AI Appreciation Day, we pause to reflect – not just on how far we’ve come, but on the limitless future ahead. A decade ago, artificial intelligence was largely experimental, often misunderstood, and cautiously adopted. Today, it shapes our everyday lives – from personalised healthcare and smarter cities to securing cyber space and decoding complex global challenges.
What once seemed like science fiction is now the pulse of progress. AI no longer just analyses data; it reasons, predicts, and adapts. It collaborates with humans, augments our creativity, and even safeguards our digital and physical environments. In cyber security, for instance, AI has shifted the balance – empowering defenders with predictive insights and autonomous threat response.
Yet this is only the beginning. The next frontier lies in ethical, responsible AI—where transparency, fairness, and human oversight are embedded into every algorithm. We are stepping into an era where AI becomes not just a tool, but a trusted partner.
As we appreciate what AI has already enabled, let’s also imagine what it can do—if guided by human values, inclusive design, and bold innovation. The future is not about AI replacing us, but AI elevating us.
George Moawad
Country Manager Oceania, at Genetec
AI continues to attract interest, with the Genetec 2025 State of Physical Security Report finding that 42 per cent of security decision-makers express a keen interest in AI-driven solutions.
However, concerns about privacy, ethics, and data bias remain pivotal. Businesses now are increasingly focusing on responsible AI adoption, emphasising transparency, governance, and adherence to ethical standards.
Today, there’s a dual focus on AI’s potential for operational efficiency and the need for stringent governance protocols. For example, AI-powered analytics are enhancing situational awareness and reducing response times; however, organisations demand assurances that such tools respect privacy and comply with regulatory frameworks.
AI’s role in security extends beyond analytics. Predictive modelling, for example, enables systems to anticipate and prevent potential threats before they materialise.
However, AI Appreciation Day is a good opportunity for Australian enterprises to be advised that this capability also raises concerns about over-surveillance and potential misuse. As a result, companies should establish AI oversight committees and protocols to address these challenges proactively.
Fabio Fratucello
Field CTO World Wide at CrowdStrike
AI is lowering the barrier to entry for adversaries, allowing them to automate social engineering, misinformation campaigns, and credential harvesting at unprecedented speed and scale. CrowdStrike's 2025 Global Threat Report reveals adversaries are using large language models (LLMs) for sophisticated phishing and business email compromise attacks that closely replicate human behaviour.
At the same time, AI is transforming organisations' ability to detect and respond to cyber threats. Under immense pressure from rising alert volumes, faster breakout times and a persistent shortage of skilled analysts, security teams must leverage AI to protect their organisations and move from reactive response to proactive threat disruption. Automating time-consuming and repetitive tasks, agentic AI can be a force multiplier for security teams by enabling them to focus on understanding adversary behaviour, hunting advanced threats and stopping breaches before they escalate.
CrowdStrike's Charlotte AI Agentic Detection Triage exemplifies the power and promise of AI, by autonomously validating and prioritising threats with over 98 per cent accuracy, to save security teams up to 40 hours per week in manual alert triage. Built within a bounded autonomy framework, Charlotte AI allows organisations to define how and when automated decisions are made, giving analysts full control to set thresholds, determine when human review is required, and maintain oversight.
This combination of machine speed and human-defined guardrails empowers defenders and ensures organisations can operate at the speed of threats. Organisations should use AI Appreciation Day as a catalyst to embrace transformative AI-powered security capabilities within their security posture, which empower security teams to take back control, reduce burnout, and decisively shift the AI advantage back in their favour.
Laura Ellis
VP of Data & AI at Rapid7
AI has completely changed how businesses operate. It streamlines processes and helps teams make smarter decisions, leading to better outcomes for customers. However, it is important that every day, not just on AI Appreciation Day, we honour the people who tirelessly dedicated their time, knowledge, and drive to building and leveraging these technologies. Their work ensures that AI is not only a tool for efficiency, but also a force that can make the world a better place.
It is now our responsibility to use this technology with intention, keeping it human-centric, transparent, and ethical, so it can continue to drive meaningful impact.
Jurgen Hekkink
Head of Product Marketing at AnywhereNow
On AI Appreciation Day, we recognise the incredible impact that artificial intelligence (AI) is having on the people at the heart of customer service, the contact centre agents.
AI is quietly revolutionising how contact centres operate. Intelligent automation now handles routine queries swiftly and accurately, drastically cutting wait times and improving customer satisfaction. This allows human agents to focus on what they do best: solving complex problems, connecting with customers and delivering personalised experiences.
AI-powered assistants play a crucial role behind the scenes, offering real-time guidance, surfacing relevant information, and reducing the stress of multitasking under pressure. With AI support, agents are better equipped to resolve issues on the first contact, boosting critical performance metrics like Customer Satisfaction (CSAT) and Net Promoter Score (NPS).
However, it’s not just about customer outcomes. AI is also reshaping the agent experience for the better. By automating repetitive tasks, agents gain more time for meaningful work. They enjoy greater autonomy, reduced cognitive load and have a clearer sense of purpose. This leads to higher job satisfaction, reduced burnout, and a stronger, more resilient workplace culture.
Today, we celebrate how AI is enhancing the role of humans in contact centres. AI is helping agents work smarter, feel increasingly supported, and deliver service that’s faster, more consistent and more human than ever.
Helen Masters
Managing Director, APJ at Smartsheet
As we mark AI Appreciation Day, it's clear we've moved beyond the novelty stage. Today's conversation focuses on how effectively we can integrate AI into our everyday lives. Across Australia, businesses are rapidly adopting AI not as a standalone solution, but as a strategic enabler, one that alleviates the burden of repetitive tasks and empowers people to focus on high-value, impactful work. The result? A measurable lift in productivity, with teams working smarter and achieving more in less time.
At Smartsheet, we're witnessing this transformation unfold in real time. By embedding AI directly into our platform, we help teams surface insights faster, streamline operations, and make more informed decisions. However, realising AI's full potential depends not only on functionality but also on its intuitiveness, transparency, and ease of adoption.
As AI evolves to become more agentic and autonomous, it's imperative for leaders to ensure its application is guided by purpose, transparency, and humanity. The future of AI won't be defined solely by its advancements but by the wisdom with which we choose to utilise it.
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.