Kafka Topic Authorization

Introduction

Kafka topics can have many producers and consumers. OPA and build.security can be used to control who can read from and write to which topic.

Let's dive right into it.

Prerequisites

NOTE: Your local environment must have Java 8+ installed.

1. Download Kafka.

Download the latest Kafka version and extract it:

tar -xzf kafka_2.13-2.8.0.tgz
cd kafka_2.13-2.8.0

2. Start the Kafka environment.

Run the following command to start the ZooKeeper service, which must be running before the Kafka broker:

# Start the ZooKeeper service
# Note: Soon, ZooKeeper will no longer be required by Apache Kafka.
bin/zookeeper-server-start.sh config/zookeeper.properties

3. Plug the external authorization plugin into Kafka.

Download the OPA authorizer plugin JAR and place it in Kafka's libs folder.

Edit config/server.properties and add the following lines:

authorizer.class.name=com.bisnode.kafka.authorization.OpaAuthorizer
opa.authorizer.url=http://localhost:8181/v1/data/kafka/authz/allow
opa.authorizer.cache.expire.after.seconds=1

This configuration will:

  • instruct the Kafka server to use OPA as an external authorization service for all operations;

  • provide the endpoint address of the external authorization service;

  • instruct the authorization plugin to cache decisions for 1 second (in production, increase this to a larger value to reduce load on the authorization service).

4. Run PDP with Kafka policy enforced.

Follow the PDP deployment instructions and publish a simple Kafka policy.

Make sure the policy package name is kafka.authz.
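
As a starting point, a minimal policy with that package name could deny everything by default and allow a single operation. This is an illustrative sketch only; the policy you actually publish may differ:

package kafka.authz

# Deny everything unless a rule below matches
default allow = false

# Example rule: allow any principal to describe topics
allow {
    input.action.operation == "DESCRIBE"
    input.action.resourcePattern.resourceType == "TOPIC"
}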

5. Start the Kafka server.

Open another terminal session and run:

# Start the Kafka broker service
bin/kafka-server-start.sh config/server.properties

6. Create a topic.

Run the following command to create a topic named topic1:

bin/kafka-topics.sh --create --topic topic1 --bootstrap-server localhost:9092

7. Watch the decision logs.

You should already see decision logs in the project, since the topic creation operation went through the authorizer and was allowed.

8. Create a Kafka producer and consumer.

Experiment with the policy: add a Kafka producer and a consumer, watch the decision logs, and add more rules to the policy.

On another terminal screen, start a Kafka producer:

bin/kafka-console-producer.sh --topic topic1 --bootstrap-server localhost:9092

On another terminal screen, start a Kafka consumer:

bin/kafka-console-consumer.sh --topic topic1 --bootstrap-server localhost:9092

Usage

Example structure of the input data sent by the opa-kafka-plugin to Open Policy Agent:

{
  "action": {
    "logIfAllowed": true,
    "logIfDenied": true,
    "operation": "DESCRIBE",
    "resourcePattern": {
      "name": "alice-topic",
      "patternType": "LITERAL",
      "resourceType": "TOPIC",
      "unknown": false
    },
    "resourceReferenceCount": 1
  },
  "requestContext": {
    "clientAddress": "192.168.64.1",
    "clientInformation": {
      "softwareName": "unknown",
      "softwareVersion": "unknown"
    },
    "connectionId": "192.168.64.4:9092-192.168.64.1:58864-0",
    "header": {
      "data": {
        "clientId": "rdkafka",
        "correlationId": 5,
        "requestApiKey": 3,
        "requestApiVersion": 2
      },
      "headerVersion": 1
    },
    "listenerName": "SASL_PLAINTEXT",
    "principal": {
      "name": "alice-consumer",
      "principalType": "User"
    },
    "securityProtocol": "SASL_PLAINTEXT"
  }
}
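
Policy rules can match on any of the fields above. For example, a rule that lets the user alice-consumer read alice-topic might look like the following sketch (the principal and topic names are taken from the sample input above, not from a real deployment):

package kafka.authz

default allow = false

# Allow the user "alice-consumer" to read the topic "alice-topic"
allow {
    input.requestContext.principal.principalType == "User"
    input.requestContext.principal.name == "alice-consumer"
    input.action.operation == "READ"
    input.action.resourcePattern.resourceType == "TOPIC"
    input.action.resourcePattern.name == "alice-topic"
}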

More Information

The following table summarizes the supported resource types and operation names.

input.action.resourcePattern.resourceType    input.action.operation
CLUSTER                                      CLUSTER_ACTION
CLUSTER                                      CREATE
CLUSTER                                      DESCRIBE
GROUP                                        READ
GROUP                                        DESCRIBE
TOPIC                                        ALTER
TOPIC                                        DELETE
TOPIC                                        DESCRIBE
TOPIC                                        READ
TOPIC                                        WRITE