Family Offices & Artificial Intelligence: Getting Started

Mar 16, 2026

 

Below is a recap of the keynote fireside chat from Archway's Immersion Lite Conference in Dallas, presented from the perspective of Archway’s CEO, Anthony Abenante. 


I recently had the opportunity to sit down with Catherine Fankhauser, Partner and Practice Leader, Family Office Advisory Services at Ernst & Young (EY), where she shared her insight on Family Offices and how they are approaching the utilization of Artificial Intelligence (AI). She brought a well-informed perspective given that she spends 100% of her time with Single Family Offices (SFOs) in areas including operations, governance, and risk.

As we began our conversation, we quickly agreed that AI means different things to different people; discussion and debate are ubiquitous at this point, both professionally and personally. Because EY works with a broad swath of Family Offices, I asked Catherine to provide insight into how those offices are collectively thinking about the implications of AI adoption for their operations. In effect, what's driving both their motivation to adopt AI and their fear of its implications?

Catherine observed that a growing, and now prevailing, share of their clients is fully aware that they can no longer ignore AI but just don't know how to get started. It's as if people are lined up with their toes at the edge of a swimming pool, looking to see which of their friends have jumped in. They see other organizations in their ecosystem, like banks and investment firms, already getting in the pool, and as a result they ask: should we get in the AI pool as well? Many, she believes, are waiting for that first real use case, a tangible reason to take the plunge. In many ways, her swimming pool analogy was a perfect summary of the current state.

And while they stand by that AI pool, these Family Office leaders are reading about the vast sums the likes of Alphabet, Meta, and Microsoft are pouring into the infrastructure projected to be required for LLMs. So instead of asking how cold the water is, they should be asking how and where to begin their respective AI journeys. What should they be thinking about, in a practical way, so they don't get stuck in a sinkhole of discovery, overspend, and end up with intangible results?

Given all the above, the natural next question was: how should people think about getting started?

What follows are Catherine’s suggested five areas of focus for initiating an AI journey…

1. Get Educated

When people talk about AI, they employ a distinct and growing vernacular. Get educated on the types of AI, their components, and their terms. To be clear, these terms are not interchangeable: Robotic Process Automation is not machine learning, and Large Language Models (LLMs) are not agentic AI. Simply put, it's imperative to get educated.

Further to that point, I noted that a growing set of excellent AI resources is now available, and that Archway plans subsequent posts highlighting many of them as we all collectively learn, develop, and share better AI deployment and utilization practices. Stay tuned.

2. Have Good Data

Catherine stressed that AI capabilities and functionality are only going to be as good as the data they are processing. Bad data = bad output, rendering AI utilization of no use. She gave the example of an EY Family Office client that was having a horrendous time with wash sales when preparing tax returns. EY has an AI tool that analyzes all of a client's accounts and identifies wash sales; unfortunately, this Family Office couldn't make use of the tool because they couldn't generate the data to feed it. We both agreed that now is the time to start organizing your data into modernized storage structures housed in a repository where you can aggregate all your data sources.
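Catherine's wash-sale example shows why clean, aggregated transaction data matters before any AI tool enters the picture. As a purely hypothetical sketch (not EY's tool, and a deliberate simplification of the actual tax rule), a basic screen flagging a loss sale with a repurchase of the same security within 30 days might look like:

```python
from datetime import date, timedelta

WINDOW = timedelta(days=30)  # simplified wash-sale window

def flag_wash_sales(transactions):
    """Flag loss sales with a repurchase of the same symbol within 30 days.

    Each transaction is a dict with: date, symbol, action ("BUY"/"SELL"),
    proceeds, and cost. This is an illustrative simplification of the rule.
    """
    loss_sales = [t for t in transactions
                  if t["action"] == "SELL" and t["proceeds"] < t["cost"]]
    buys = [t for t in transactions if t["action"] == "BUY"]
    flagged = []
    for sale in loss_sales:
        for buy in buys:
            if (buy["symbol"] == sale["symbol"]
                    and abs(buy["date"] - sale["date"]) <= WINDOW):
                flagged.append((sale, buy))
                break
    return flagged

txns = [
    {"date": date(2025, 1, 10), "symbol": "ABC", "action": "SELL",
     "proceeds": 900.0, "cost": 1000.0},
    {"date": date(2025, 1, 25), "symbol": "ABC", "action": "BUY",
     "proceeds": 0.0, "cost": 950.0},
    {"date": date(2025, 3, 1), "symbol": "XYZ", "action": "SELL",
     "proceeds": 1200.0, "cost": 1000.0},
]

# Flags the ABC loss sale (repurchased 15 days later); the XYZ sale is a gain.
print([(s["symbol"], s["date"]) for s, _ in flag_wash_sales(txns)])
```

The point of the sketch is less the rule itself than the prerequisite: a Family Office can only run a screen like this if every account's transactions are already normalized into one consistent structure, which is exactly the data work Catherine recommends doing first.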

3. Access Control

When thinking about access control, I typically conjure images of keeping the humans away from the technology. That's not what Catherine meant; she meant the converse: keep the technology away from the humans (and their sensitive data and information). AI does not discriminate. It will take everything, including highly sensitive data, and use that information to train itself. Organizations need to think about how they are going to put a fence around the AI while being judicious about what data to feed it. Catherine stressed the need to perform curative work to properly ringfence, and where appropriate delete, data that shouldn't be consumed by an LLM. This is just one of the many steps in the forward-looking AI governance that will need to be developed, evolved, and continuously applied with unfailing rigor.
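In practice, ringfencing often starts with something very simple: a deny-list of fields that never leave the office's systems. As a minimal, hypothetical sketch (the field names are illustrative, not a standard), a redaction pass applied to every record before it reaches any external model might look like:

```python
# Hypothetical deny-list of fields that should never reach an external LLM.
SENSITIVE_FIELDS = {"ssn", "account_number", "dob", "home_address"}

def ringfence(record: dict) -> dict:
    """Return a copy of the record with sensitive fields redacted.

    Field names are matched case-insensitively against the deny-list.
    """
    return {k: ("[REDACTED]" if k.lower() in SENSITIVE_FIELDS else v)
            for k, v in record.items()}

record = {"name": "J. Smith", "SSN": "123-45-6789", "holding": "ABC Corp"}
print(ringfence(record))
```

A real deployment would layer much more on top (logging, role-based access, deletion of data that should never be retained), but the design choice the sketch illustrates is the important one: filtering happens before the AI sees the data, not after.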

4. Privacy Policy

Catherine added that Family Offices need to think about other aspects of managing their data beyond access control. They need to determine what data is off limits: How long should it be kept? Who has access to it? Do we have a retention policy? What is our privacy policy? She suggested that once you settle on those parameters, you memorialize them and make sure they are clearly understood.

In addition to internal data, think about the data your vendors have access to as well: What information do our lawyers, accountants, and other service providers have about our organization? Do we want them to have it if they are using AI? Do we want our information training their models? These third parties need to be part of your privacy footprint.

5. Governance

This might be the most important of the five areas, and it is one we spend significant time thinking about here at Archway. How do you best implement and manage AI usage and deployment in your organization? Who should be the architects of your AI policy, and how should they be structured: a cross-functional committee, or a standing governance board? The policy can, and should, also be manifested in a set of guidelines that everyone agrees to abide by. Catherine said to think of governance as the guardrails and controls for implementing AI throughout your complex: it determines who decides why and where AI is used, what data will be available to the language models, and who has access to the results. It can also assess risks and determine whether an AI capability is delivering on its promised functionality. As our clients consider these questions, we continue to manage Archway's own AI discovery and deployment journey with these nuanced decisions in mind.

In conclusion, we're very grateful to Catherine for taking the time to share her insights with us. I found her five areas of focus to be an insightful and effective guide for framing one's thinking about getting started on an AI journey. Here's hoping it will help more people take the plunge and get into the pool.

Anthony Abenante | CEO Archway Group



 
