Notes from GAIM Ops Cayman 2026 on the New Standard of Care for AI-Era Cybersecurity
At GAIM Ops Cayman 2026, Abacus CEO Anthony J. D’Ambrosi moderated a wide-ranging panel on cybersecurity and the regulatory environment in the age of AI. He was joined by Kathleen McGee, partner at Lowenstein Sandler and former Chief for the Bureau of Internet and Technology at the New York Attorney General’s office, and Stu Solomon, CEO of HUMAN Security, an enterprise focused on protecting the digital experience by safeguarding the entire customer journey—from first ad impression to final transaction. What followed was a candid conversation about where the threat landscape is moving, what regulators now expect, and how alternative investment firms can adopt AI without losing control of their data, their defenses, or their accountability.
A few themes from the discussion are worth pulling forward for the alternative investment community.
AI Is Closing the Gap Between Defenders and Attackers
For three decades, cybersecurity has operated under a familiar, uncomfortable principle: the attacker only has to be right once, while the defender has to be right every single time. The release of Anthropic’s Mythos model has reignited that conversation, and the prevailing narrative is that the gap between the two has widened considerably. Stu offered a different reading of the moment. Yes, vulnerabilities once known only to a handful of researchers are now surfacing at unprecedented speed, and yes, less sophisticated actors are now capable of outsized impact. But for the first time, the defender is operating with the same lens. The tools that compress an attacker’s innovation cycle compress the defender’s as well, and the advantage shifts to whichever side can mitigate fastest and understand most clearly what is truly critical inside its environment.
For investment firms, that reframing matters. AI is being absorbed into the business at speed, and the value firms get from it depends entirely on the infrastructure built around it—the security to defend it, the governance to control it, and the people and processes to run it safely.
What “Reasonable” Now Demands in the AI Era
Perhaps the most consequential shift of the past year has come from regulators themselves, who have been quietly but deliberately redrawing what they consider a reasonable standard of care. As Kathleen reminded the room, ignorance is no longer a defense. Every firm in the industry now has access to the same tools to conduct proactive risk discovery, and regulators expect that discovery to actually happen—with the work product to prove it. Stu captured the new posture in a single line: “The question is, do you really want to know? And if you do know, what are you doing about it?”
Enforcement is following the framing. The recent amendments to Reg S-P, paired with the SEC’s increasingly vigorous stance toward registered investment advisers, have raised the bar well beyond “paper compliance.” Regulators now want documented evidence that procedures are actually being followed, that data has been mapped across investment strategies, and that controls live where the data lives. The federal picture is otherwise fragmented: the FTC’s posture remains unsettled, state regulators are still finding their footing, and the SEC is the agency moving most decisively—looking both forward and backward through prior years of conduct.
The international picture is moving on a parallel track. Early regulation out of Europe is centering on the quality of data going into models, the guardrails placed around them, and the trustworthiness of their outputs. The pragmatic implication for firms is the need for basic, enforceable internal policies that define what “normal” looks like—data classification, data handling, and data leakage prevention. The playbook will feel familiar to anyone who lived through the cloud migration cycle a decade ago, applied now to a higher-stakes asset class. Kathleen’s framing of what all of this requires of firms was the simplest line of the panel: “Treat compliance as the floor, not the ceiling.”
Data Mapping and Digital Identity Are the Foundation
Two ideas surfaced again and again throughout the discussion: consent and validation. Knowing where your data sits, who has permission to touch it, and whether the model touching it is actually behaving as intended—those are the controls that determine whether AI use is defensible, both to regulators and to investors. Kathleen added a nuance worth absorbing: a model producing the right output does not always mean the tool is working correctly. Sometimes it just got lucky. Validation, properly understood, is an ongoing discipline—and one of the areas firms are most prone to underinvesting in.
Identity is the connective tissue that holds all of this together. An AI agent, as Anthony framed it, behaves like a digital employee—with permissions that can be granted, monitored, and revoked, and with behaviors that have to be actively orchestrated and governed in real time. That is precisely why the security industry is leaning so hard into identity right now: digital identity, and the permissions associated with it, is where the next generation of risk and the next generation of automation converge.
For firms still treating data mapping as a back-burner project, the panel was unambiguous: it has become both a regulatory requirement and a real business advantage. The firms that map first will be the ones in a position to monetize the result.
Triage Speed Defines the Real Cost of a Breach
The conversation about breach cost moved past the obvious. The real metric, as Stu described it, is the speed at which a firm can identify what “normal” looks like, recognize deviation from it, and mitigate impact down to a level of residual risk the business can actually tolerate. Driving residual risk to zero will never be achievable or cost-effective; what firms can build, and what investors and regulators increasingly expect, is a triage process that is clear, fast, and trusted across the business.
Kathleen layered on a sharper edge. The law now requires firms to take their investors’ risk tolerance into account in ways they previously did not, particularly with respect to AI use, AI adoption, and data security. That changes how risk frameworks need to be built, how they are documented, and how they are enforced—because the audience for those frameworks now includes LPs as well as regulators.
The operational reality, as Anthony described it from Abacus’s vantage point on the front line of incident response, is that when a breach occurs, regulators will Monday-morning-quarterback the response. Why didn’t you act sooner? Why did it take five incidents before this was escalated? How were you supposed to know that an event blending quietly into normal traffic was a sophisticated actor pen-testing your environment? The answer, increasingly, is AI-enriched triage—the ability to enumerate the environment, score asset criticality, and route alerts intelligently—because that is what shortens the gap between an event and a contained incident. And reputationally, the post-breach response is now scrutinized as closely as the breach itself.
Accountability Stays With the Human
The panel’s most pointed exchange turned on agentic AI. Agentic systems, in the definition Stu offered, are characterized by the ability and authority to act on behalf of a human. What is critical to understand—and is often missed in market commentary—is that accountability does not transfer with that grant of authority. Even as agents learn, anticipate, and self-heal, the responsibility for their actions remains squarely with the person who set them in motion. Until policy and enforcement position that accountability clearly with humans, the panel’s shared view was that fully autonomous adoption will, and should, move slowly.
A useful piece of context on scale: of the roughly 20 trillion digital interactions HUMAN Security observes each week, about 3.5 percent are already agentic in nature. And those agents are increasingly doing meaningful work on behalf of the entities they represent—logging into accounts, executing transactions, and interacting with downstream systems at high volume. Agentic AI has already arrived at internet scale.
Hiring and Training for the AI Era
The panel closed on people, and rightly so. For Stu, the hardest part of scaling AI has been finding the right talent. Engineers and leaders with the mindset and cultural orientation to use these tools responsibly are genuinely difficult to identify, and the work begins at the screening stage. Kathleen offered a complementary point that applies at every level of the organization: the most pressing AI risk inside many firms today is cultural. Junior staff increasingly accept AI-generated outputs at face value, skipping the pressure-testing that more experienced colleagues would instinctively apply. Critical thinking, in that sense, has become the most underrated skill in the modern firm—and a quiet case for the enduring value of a liberal arts education in a technical world.
Anthony closed with a reference that captured the moment well. In a recent conversation, the CEO of Uber framed the asymmetry this way: when a human driver gets into an accident, it is expected. When an automated vehicle does, it is the lead story. AI in financial services will be held to the same higher bar. The firms that earn trust will be the ones that built the policies, the procedures, and the people to deserve it.
The Takeaway
If there was a unifying message from the panel, it was the line Anthony used to close the discussion: “AI is not a trend, it’s not a tool, it’s not a piece of software. It’s a fabric upon which we will do business.” The firms that win in this next chapter will be the ones that pair rapid adoption with disciplined governance: clear data classification, defensible identity controls, fast incident triage, and a culture of accountability that keeps human judgment squarely in the loop.
To learn how Abacus partners with alternative investment firms to operationalize this shift across managed IT, cybersecurity, and AI governance, get in touch with our team.
