AI Guidance
Organizations across the globe are seeking to rapidly introduce artificial intelligence (AI) tools into their daily operations, but it is critical that measured steps are taken first to ensure technical, cybersecurity, and data privacy best practices are followed. This page provides high-level guidance on what organizations should consider before authorizing and signing up for AI tooling.
Don’t make assumptions about AI
Before jumping into implementation, it is key that organizations consider best practices for the technical and administrative controls that should be in place for appropriate use.
Numerous standalone and extension tools are tied to AI systems, including OpenAI (ChatGPT), Anthropic (Claude), and Microsoft Copilot. The appeal of these tools is that they are fast, relatively inexpensive, and allow operations to scale. The proposed benefits are significant, spanning activities such as:
- Cross-mapping of disparate data sets
- Rapid legal document review and summary
- Automating recurring administrative tasks
- Generating code for programs and applications
Risks Connected to AI Usage
While not exhaustive, this list covers the most critical risks in the current landscape of AI usage.
Unauthorized or Unintentional Data Exposure
AI usage can expose confidential data to outside parties, or to unauthorized internal parties, if appropriate protections are not in place. Unless you opt out, your data may be used to train AI models, leading to potential leakage outside your organization. Further, prompt injection techniques may be used to forcibly expose data connected to these AI systems.
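One mitigating step is to screen prompts for sensitive patterns before they ever leave the organization for an external AI service. The sketch below is a minimal, hypothetical illustration, not a substitute for a real DLP product: the patterns and the redaction format are assumptions chosen for the example.

```python
import re

# Hypothetical patterns; a real deployment would rely on a DLP product's classifiers.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern before the prompt leaves the org."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt
```

A gateway applying a filter like this to every outbound prompt gives you one chokepoint to audit, rather than trusting each user to self-censor.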
Regulatory, Legal, and Compliance Risks
Regulatory oversight of AI use is growing. The EU AI Act applies to a broad range of AI usage and processing and carries penalties akin to those under the GDPR. Similarly, personal data used in AI systems may need to be disclosed to individuals as part of required privacy notices. Leveraging AI without prior authorization may also put organizations at risk of violating existing contracts that require confidentiality of shared data.
Hallucinations
AI is an imperfect tool and relies on a growing, but limited, set of data references. Asking highly specific questions for which there is no appropriate reference backing may lead AI solutions to generate arbitrary and incorrect answers.
Insecure Code Development
In 2025, OWASP identified inappropriate trust in AI-generated code (colloquially called “vibe coding”) as a significant security risk in newly developed applications and programs. For internally used programs the risk may be lower, but public-facing or highly sensitive applications developed this way carry substantially greater danger.
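A classic failure mode in unreviewed AI-generated code is concatenating untrusted input into a database query. The before/after below is an illustrative sketch (the table and column names are hypothetical); the parameterized version is what a human reviewer should insist on.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Typical AI-generated pattern: string concatenation invites SQL injection.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # Reviewed version: a parameterized query treats input as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

With a payload like `x' OR '1'='1`, the unsafe version returns every row in the table, while the safe version returns nothing; this is exactly the class of bug that slips through when nobody with coding experience reviews the output.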
Undiscovered Risks
Just as new security exploits are discovered every day, new risks stemming from AI systems continue to emerge. It is important to remain informed, understand the risk posture of the solution you are seeking to implement, and weigh those findings against your organization’s risk appetite.
Countering AI-Centric Risks
In any system, operation, or action, there is no such thing as zero risk: there are always inherent risks (risks present from the outset) and residual risks (risks remaining after mitigation steps are taken). With this in mind, there is a series of recommended practices your organization should adopt before implementing any new AI system. These recommendations are not “one size fits all” and are broad strokes for policy coverage.
Technical Controls
Technical controls will range in ease of implementation depending on the degree to which you are mitigating risks and the level of licensing used for an AI tool. The following are broad recommendations for technical policies to implement and are vendor-agnostic.
- Disable the ability of the AI tool to train on your data
- Minimize retention periods of data or disable “memories” outright
- Enable SSO/SAML authentication where possible
- Enable audit logging within the solution and, ideally, forward the logs to your Security Information and Event Management (SIEM) system
- Disable or highly restrict any plugins, connectors, or integrations with other systems (e.g., your Microsoft environment, your cloud storage solution)
- Enforce domain restrictions to only authorize your organization’s domain within the tenant
- Set automatic log-off for idle sessions, ideally within 15 minutes (this is different from the automatic device log-off)
- Disable external sharing of AI conversations outside of the organization
- Restrict user creation of API keys
- Follow the principles of least privilege and role-based access controls when assigning licenses and permissions
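Several of the controls above (audit logging, idle-session limits) can also be enforced at the application layer if your organization routes AI traffic through its own gateway. The following is a minimal, vendor-agnostic sketch under stated assumptions: the `call_model` callable is a hypothetical stand-in for whatever API your licensed tool exposes, and real log shipping to a SIEM is out of scope here.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")  # in production, ship this logger's output to your SIEM

IDLE_TIMEOUT_SECONDS = 15 * 60  # mirror the 15-minute idle log-off policy

class AuditedSession:
    def __init__(self, user: str):
        self.user = user
        self.last_activity = time.monotonic()

    def ask(self, prompt: str, call_model=lambda p: "<model response>") -> str:
        # Enforce the idle timeout before every request.
        now = time.monotonic()
        if now - self.last_activity > IDLE_TIMEOUT_SECONDS:
            raise PermissionError("Session expired; re-authenticate via SSO")
        self.last_activity = now
        # Emit a structured audit record; note we log prompt length, not content,
        # to avoid copying sensitive data into the log stream.
        audit_log.info(json.dumps({"user": self.user, "prompt_chars": len(prompt)}))
        return call_model(prompt)
```

Even when a vendor already offers these settings natively, a gateway like this gives you one enforcement point that survives a change of AI vendor.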
Under no circumstances should AI tools be authorized without organization-sponsored licensing. Personal licenses (both free and advanced) within an organization should be prohibited in all cases.
It is important to perform due diligence and review documentation to ensure your organization has the appropriate licensing to enact the listed controls.
Administrative Controls
Administrative controls should not simply be policies to allow or disallow the use of AI but comprehensive standards to properly identify and address the risks of AI usage.
Before seeking to implement an AI solution, it is important to set very clear criteria and inputs for your overall operations. These include:
- A defined data classification schema to understand what data you hold and the sensitivity of that data
- Risk management considerations to identify, document, and address risks that may arise from the use of AI systems, with these considerations being proactive and ongoing
- Defined governance structure to determine who owns and is responsible for enforcement of all security controls related to AI
- Defined approved use cases for AI solutions and what data is authorized for those use cases
- Due diligence on the interested solutions, both before onboarding and on an ongoing basis to identify incidents and security failures for the tool’s infrastructure
- Identification of regulatory, legal, and compliance risks tied to the implementation of an AI solution
- Continuous monitoring for potential incidents stemming from the organization’s use of AI, as well as for non-compliance with organizational policies
- Creation or amendment of Secure Software Development Lifecycle (SSDLC) practices to reflect the accepted uses of AI for coding/programming
- Staff training on acceptable practices, ensuring staff understand their responsibilities related to an AI system
This list may seem long, but it can realistically be incorporated into existing security policies such as a Written Information Security Plan (WISP). The more onerous task for organizations is enforcing these policies and procedures, since deficiencies may come up during an audit.
A Note on Data Loss Prevention Tooling
DLP tools like Microsoft Purview and Egnyte are becoming increasingly popular in the IT security landscape. These tools help prevent data exposure by automatically labeling data and setting access controls to prevent unauthorized exposure to both external and internal parties.
If you are seeking to leverage an AI tool to process, review, and understand your confidential or sensitive data, even if it is internally facing data, it is of utmost importance that you integrate a DLP solution with your authorized AI tool.
Practical Next Steps
It may seem daunting to consider the steps that must first be taken before safely implementing AI. To help your organization formulate an action plan, consider the following suggested next steps for your AI journey:
- Define an owner or group to oversee AI implementation and usage.
- Define your organization’s acceptable use cases for AI and build a set of policies and procedures around this.
- Coordinate with a trusted partner on defining a roadmap for AI security, starting with implementing or refining a DLP solution as necessary.
- Perform appropriate due diligence on any prospective AI solutions that meet your organization’s use cases, including evaluation of control options by license level.
- Coordinate with a trusted partner on implementing the AI solution and setting up integrations securely (e.g., SSO, MCP servers).
- Develop an internal process to periodically check the performance and compliance of the AI solution.
Securely reap the benefits of AI services
Organizations must consider all factors before electing to sign on to an AI service and be empowered to enforce control guardrails through the entire lifecycle of an AI solution. If you have further questions or want guidance on how your organization can securely and responsibly implement an AI solution, connect with our in-house experts.
Frequently Asked Questions
How do I know if my organization is ready to utilize AI?
If your organization has the following fully implemented, start a conversation with your internal compliance and IT team or your trusted IT partner to see if you are ready to utilize AI internally:
- DLP is deployed, with all data labeled in accordance with the business’s policy, AND protections are in place to prevent users from uploading impactful data, such as Confidential and Sensitive data, to external services, including AI tools.
- You have clearly defined policies and procedures in place regarding oversight of AI solutions with specified responsible owners of these practices within your organization. All users have reviewed and acknowledged all relevant policies.
- You have resources (both human and technical) to ensure appropriate monitoring and review of systems on a regular cadence.
- You have performed due diligence against your AI vendor(s) and have confirmed the vendor meets your organization’s requirements.
- You have basic identity protection services deployed, including Multifactor Authentication, SSO, and authentication policies to restrict user login.
- You have basic endpoint security in place, including EDR with MDR, DNS filtering, and a patching program.
When is the right time for my business to adopt AI?
This depends on your organization’s security posture and stance. All businesses are at different stages of their AI journey. Work with your internal compliance and IT team or your trusted IT partner to discuss when AI is right for your business.
What if my organization is already using AI without these controls in place?
It is likely infeasible to immediately cease all such operations; therefore, we recommend the following steps:
- Immediately perform a high-level risk assessment to identify the following:
  - What AI solutions are being used
  - How the solutions are being used
  - What data might be used with the AI solutions
  - Whether the solutions are licensed appropriately
  - An estimate of how many people are using the solutions
- Once this is understood, use the information to implement initial guardrails for AI use via an AI-specific Acceptable Use Policy that minimizes use as much as feasible.
- Once the policy is developed, disseminate it to staff and have them acknowledge it.
- From there, have a more strategic conversation with senior management and relevant external consultants to determine the path forward.
Understand that, even with the above measures, your organization may remain at risk until appropriate security measures are in place, depending on how you utilize AI.
Which AI tool does Abacus recommend?
Abacus cannot recommend a specific AI tool and always recommends reviewing a vendor before signing up for any services. Before signing up for an AI tool, perform due diligence to ensure the vendor meets your security and compliance requirements. Work with your internal compliance and IT team or your trusted IT partner to discuss your requirements further.
Why is my AI tool not performing as well as expected?
Oftentimes, the effectiveness of an AI tool is heavily driven by how users interact with it, including their prompts and the context provided to the AI. While another AI tool may perform better for your use cases, training end users on proper usage of the service may improve your output and experience. Work with your internal compliance and IT team or your trusted IT partner to discuss AI prompt training and related topics further.
Should recent news about an AI vendor change our plans?
As with all AI services, organizations should thoroughly evaluate every vendor and service before implementation to fully understand the potential risk and impact of an AI tool. Despite the recent news, Abacus does not have any immediate recommendations regarding any mainstream AI tool; we would recommend discussing these concerns with your internal compliance and IT team or your trusted IT partner for guidance. Some organizations, depending on industry, business, and relationships, may need additional considerations or precautions before implementing these tools.
Can we deploy our own in-house LLM or AI chatbot?
Deploying an in-house LLM/AI chatbot is a very advanced process that requires additional security and compliance considerations compared to utilizing a SaaS AI provider, and it is often very costly. Abacus strongly recommends reviewing the organization’s requirements, resources, and ability to manage this service before exploring it further. When deploying an in-house AI bot, Abacus strongly recommends performing an AI penetration test, as well as additional assessments, to ensure the system is set up securely before moving to production.
Can I use AI to write code without any coding experience?
Using AI to code without any coding experience is colloquially known as vibe coding. As with all AI tools, a user should never ask an AI to do a task that the user cannot do themselves.
AI is not infallible and is prone to making mistakes, or “hallucinating,” meaning making up information. A human should always review and validate the output of an AI before utilizing it, including everything from reports to application code. Without this human oversight, an AI may generate inaccurate, insecure, or otherwise dangerous outputs (especially concerning for code) that can be extremely damaging to a business.
Vibe coding can be especially risky if not done safely. Talk with your internal compliance and IT team or your trusted IT partner about establishing a Software Development Lifecycle (SDLC) around vibe coding to enable users to create applications safely.
Read more about an example of the pitfalls of coding with AI without coding experience: Vibe coded applications full of security blunders
