Quilt Blog

A.I. is Just Compute: Governance for LLMs is easier than you think (with AWS Bedrock examples)

Written by Aneesh Karve | September 18, 2024

If you’re worried about security and controls for “AI” in your organization, you might be further along than you think. Although generative AI offers novel capabilities in the form of conversation and reasoning, AI services are ultimately compute services. Just like any compute service you already know (e.g., EC2, BigQuery, or Databricks), generative AI models consume data as input and emit data as output.

We begin with key privacy considerations for business leaders, then introduce data perimeters for architects, demonstrate how to construct a data perimeter with IAM, and conclude with a simplified model of generative AI security that business leaders and architects can apply as they roll out large language models (LLMs).

Can third parties access my data?

We all use the Internet, but we less often consider the distinction between public and virtual private cloud services. Public cloud services, although they encrypt traffic in both directions, are nevertheless reachable by anyone on the open Internet. That does not mean that anyone can read the data you send to services like perplexity.ai and chatgpt.com, but it does mean that anyone can attack these services. Furthermore, public cloud services tend to be multi-tenant (many users share the same infrastructure) and provide little to no control over data residency (where data are stored) or how data are stored at rest. In fact, the data storage policies of a public cloud service are ultimately a matter of faith in the provider and its certifications, since the service hides the native storage layer from its users.

Public cloud services, in brief, expose a broader attack surface, require greater trust in third parties, and may even include onerous terms of service that enable the provider to access your data. For instance, Adobe recently updated its license provisions to include the following 😳:

2.2 Our Access to Your Content. We may access, view, or listen to your Content through both automated and manual methods, but only in limited ways, and only as permitted by law. [ATL-ADB]

In contrast to the public cloud, virtual private cloud services are offered on private Internet Protocol addresses and require less trust in third parties, since you directly control how data rest and transit. We therefore recommend virtual private cloud models as the only viable generative AI services for sensitive business data. A virtual private cloud model keeps data under your exclusive control by restricting who can access your models, which resources they can reach, and from which networks they can connect.

In the next section we show how to enforce that control by applying a data perimeter around your models and the data they traffic in.

Enter the data perimeter (so no one else can)

A data perimeter is a set of preventive guardrails that help to ensure only trusted identities access trusted resources from expected networks [AWS-DP].

You’ll notice that a data perimeter considers who can access data (Identity), which data and compute they can access (Resources), and where they can access it from (Network). You use your cloud provider’s Identity and Access Management (IAM) controls to configure Identity, Resources, and Network policies into a data perimeter.

Identity & Resources

For instance, suppose you wish to control which identities can access a specific foundation model in Amazon Bedrock. Although Bedrock currently does not support resource policies, you can nevertheless attach a policy to the roles of your intended users.

    {
        "Version": "2012-10-17",
        "Statement": {
            "Sid": "AllowInference",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": "arn:aws:bedrock:*::foundation-model/model-id"
        }
    }
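
Replace model-id in the Resource ARN with the identifier of the foundation model you intend to expose; for instance, Anthropic model identifiers take forms like anthropic.claude-3-5-sonnet-20240620-v1:0.
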
If you further wish to restrict the principals and networks that can access a given model, you can use a VPC endpoint. The VPC endpoint is accessible only to principals in your VPC and routes traffic over AWS’s private network. An endpoint policy like the following controls which principals can invoke models through it.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowSpecificPrincipals",
          "Effect": "Allow",
          "Principal": {
            "AWS": [
              "arn:aws:iam::123456789012:role/Bob",
              "arn:aws:iam::123456789012:user/Alice"
            ]
          },
          "Action": "*",
          "Resource": "*"
        }
      ]
    }
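
You would attach this document as the endpoint policy on your Bedrock interface endpoint (for example, com.amazonaws.us-east-1.bedrock-runtime), so that requests from principals outside the allow list are rejected at the endpoint before they ever reach the service.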

Network

Known networks include your VPCs and VPNs. Note that network is orthogonal to identity: here we consider only where a principal sits in the network topology, not who it is. You can impose network controls with IAM condition keys such as "aws:SourceVpce", "aws:SourceIp", and "aws:PrincipalIsAWSService". For details, see Quilt’s documentation on private endpoints.
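
For instance, here is a minimal sketch of an identity policy statement that denies Bedrock inference from outside a known network; the endpoint ID vpce-0123456789abcdef0 is a placeholder for your own VPC endpoint. Because requests that bypass the endpoint carry no "aws:SourceVpce" value, the StringNotEquals test denies them as well.

    {
      "Version": "2012-10-17",
      "Statement": {
        "Sid": "DenyInferenceOutsideKnownNetwork",
        "Effect": "Deny",
        "Action": [
          "bedrock:InvokeModel",
          "bedrock:InvokeModelWithResponseStream"
        ],
        "Resource": "*",
        "Condition": {
          "StringNotEquals": {
            "aws:SourceVpce": "vpce-0123456789abcdef0"
          }
        }
      }
    }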

Keeping people (and machines) away from sensitive data

Guardrails are policy filters that restrict model inputs and outputs. Consider sensitive information filters, which screen personally identifiable information (PII) from user prompts and model responses. For example, suppose your users interact with an agent to summarize sensitive data that contains Social Security numbers and bank account numbers. You can anonymize the former and block the latter with a Guardrail defined in AWS CloudFormation, as follows.

    Resources:
      BedrockGuardrail:
        Type: AWS::Bedrock::Guardrail
        Properties:
          Name: SensitiveInformationGuardrail
          # Both messages are required; they are returned whenever the guardrail blocks content
          BlockedInputMessaging: Your request contained sensitive information.
          BlockedOutputsMessaging: The response contained sensitive information.
          SensitiveInformationPolicyConfig:
            PiiEntitiesConfig:
              # Mask Social Security numbers in prompts and responses
              - Action: ANONYMIZE
                Type: US_SOCIAL_SECURITY_NUMBER
              # Reject any prompt or response containing a bank account number
              - Action: BLOCK
                Type: US_BANK_ACCOUNT_NUMBER
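
Note that defining a guardrail does not, by itself, oblige anyone to use it. Bedrock also publishes a "bedrock:GuardrailIdentifier" IAM condition key, so a sketch like the following (with a placeholder guardrail ARN) can deny inference requests that do not pass through your guardrail.

    {
      "Version": "2012-10-17",
      "Statement": {
        "Sid": "RequireGuardrailOnInference",
        "Effect": "Deny",
        "Action": [
          "bedrock:InvokeModel",
          "bedrock:InvokeModelWithResponseStream"
        ],
        "Resource": "*",
        "Condition": {
          "StringNotEquals": {
            "bedrock:GuardrailIdentifier": "arn:aws:bedrock:us-east-1:123456789012:guardrail/your-guardrail-id"
          }
        }
      }
    }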

From the standpoint of a data perimeter there’s nothing mystical about generative AI: data go into a compute service and data come out. As a result, you can view AI as a special case of the compute security that you likely already have in place for pipelines and services. You can further apply guardrails as content and security filters to expurgate personally identifiable information.

To centralize security and access management across accounts and organizations, consider AWS Control Tower and Service Control Policies (SCPs). SCPs enable you to dictate permissions at scale, without having to author individual roles and policies.
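
As an illustration, the following SCP sketch addresses the data-residency concern raised earlier by denying all Bedrock actions outside an approved Region; us-east-1 below is a placeholder for your Region of choice.

    {
      "Version": "2012-10-17",
      "Statement": {
        "Sid": "LimitBedrockToApprovedRegion",
        "Effect": "Deny",
        "Action": "bedrock:*",
        "Resource": "*",
        "Condition": {
          "StringNotEquals": {
            "aws:RequestedRegion": "us-east-1"
          }
        }
      }
    }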

References