Kafka Topic Authorization

Introduction

Kafka topics can have many publishers and subscribers. OPA and build.security can be used to control who can read from and write to which topics.
Let's dive right into it.

Prerequisites

NOTE: Your local environment must have Java 8+ installed.

1. Download Kafka.

Download the latest Kafka version and extract it:
tar -xzf kafka_2.13-2.8.0.tgz
cd kafka_2.13-2.8.0

2. Start the Kafka environment.

Run the following commands to start all services in the proper order:
# Start the ZooKeeper service
# Note: Soon, ZooKeeper will no longer be required by Apache Kafka.
bin/zookeeper-server-start.sh config/zookeeper.properties

3. Plug the external authorization plugin into Kafka.

Download the OPA authorizer and place it in Kafka's libs folder.
Edit config/server.properties and add the following lines:
authorizer.class.name=com.bisnode.kafka.authorization.OpaAuthorizer
opa.authorizer.url=http://localhost:8181/v1/data/kafka/authz/allow
opa.authorizer.cache.expire.after.seconds=1
This configuration:
  • instructs the Kafka server to use OPA as an external authorization service for all operations;
  • provides the address of the external authorization service endpoint;
  • instructs the authorization plugin to cache decision results for 1 second (in production, increase this to a larger value).
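To build intuition for what this wiring does, here is a minimal Python sketch of the query the authorizer sends to OPA for each operation. The helper name and the trimmed payload are illustrative, not the plugin's actual code; the full input document is shown in the Usage section below.

```python
import json

# Illustrative helper (not part of the plugin): builds the kind of input
# document the OPA authorizer POSTs to opa.authorizer.url for each
# Kafka operation. Only a few representative fields are included.
def build_opa_input(operation, topic, principal):
    return {
        "input": {
            "action": {
                "operation": operation,
                "resourcePattern": {
                    "name": topic,
                    "patternType": "LITERAL",
                    "resourceType": "TOPIC",
                },
            },
            "requestContext": {
                "principal": {"name": principal, "principalType": "User"},
            },
        }
    }

# The broker allows the operation when OPA answers {"result": true}.
payload = json.dumps(build_opa_input("WRITE", "topic1", "alice-producer"))
print(payload)
```

Because of the cache setting above, identical queries within the expiry window are answered from the plugin's cache rather than re-sent to OPA.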

4. Run PDP with Kafka policy enforced.

Follow the PDP deployment instructions and publish a simple Kafka policy.
Make sure the policy package name is kafka.authz, so that it matches the opa.authorizer.url configured above.
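The package name matters because OPA's data API derives the query path from it: dots in the package name become path segments under /v1/data. A small sketch (the helper function is ours, for illustration only):

```python
# OPA's data API addresses a rule by its package path: dots in the
# package name map to path segments under /v1/data.
def data_api_path(package, rule):
    return "/v1/data/" + package.replace(".", "/") + "/" + rule

# Package kafka.authz with rule allow yields the endpoint configured
# in opa.authorizer.url above.
print(data_api_path("kafka.authz", "allow"))  # /v1/data/kafka/authz/allow
```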

5. Start the Kafka server.

Open another terminal session and run:
# Start the Kafka broker service
bin/kafka-server-start.sh config/server.properties

6. Create a topic.

Run the following command to create a topic named topic1:
bin/kafka-topics.sh --create --topic topic1 --bootstrap-server localhost:9092

7. Watch the decision logs.

You should already see decision logs in the project, since the topic creation operation was authorized.

8. Create a Kafka producer and consumer.

Experiment with the policy by adding a Kafka producer and consumer, watching the decision logs, and adding more rules to the policy.
In another terminal, start a Kafka producer:
bin/kafka-console-producer.sh --topic topic1 --bootstrap-server localhost:9092
In another terminal, start a Kafka consumer:
bin/kafka-console-consumer.sh --topic topic1 --bootstrap-server localhost:9092

Usage

Example structure of the input document that the opa-kafka-plugin provides to Open Policy Agent:
{
  "action": {
    "logIfAllowed": true,
    "logIfDenied": true,
    "operation": "DESCRIBE",
    "resourcePattern": {
      "name": "alice-topic",
      "patternType": "LITERAL",
      "resourceType": "TOPIC",
      "unknown": false
    },
    "resourceReferenceCount": 1
  },
  "requestContext": {
    "clientAddress": "192.168.64.1",
    "clientInformation": {
      "softwareName": "unknown",
      "softwareVersion": "unknown"
    },
    "connectionId": "192.168.64.4:9092-192.168.64.1:58864-0",
    "header": {
      "data": {
        "clientId": "rdkafka",
        "correlationId": 5,
        "requestApiKey": 3,
        "requestApiVersion": 2
      },
      "headerVersion": 1
    },
    "listenerName": "SASL_PLAINTEXT",
    "principal": {
      "name": "alice-consumer",
      "principalType": "User"
    },
    "securityProtocol": "SASL_PLAINTEXT"
  }
}
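A policy reads these fields by path. The sketch below walks the same paths to decide the request above; it is written in Python for illustration (a real policy would be Rego), and the owner-prefix rule ("alice-consumer" may access "alice-topic") is our assumption, not part of the plugin.

```python
# Illustrative decision function over the input document shown above.
# The field paths (action.resourcePattern, requestContext.principal)
# match the plugin's input; the owner-prefix rule is an assumption.
def allow(doc):
    pattern = doc["action"]["resourcePattern"]
    if pattern["resourceType"] != "TOPIC":
        return False
    principal = doc["requestContext"]["principal"]["name"]
    owner = principal.split("-")[0]  # "alice-consumer" -> "alice"
    return (pattern["name"] == f"{owner}-topic"
            and doc["action"]["operation"] in ("READ", "DESCRIBE"))

example = {
    "action": {
        "operation": "DESCRIBE",
        "resourcePattern": {"name": "alice-topic", "resourceType": "TOPIC"},
    },
    "requestContext": {"principal": {"name": "alice-consumer"}},
}
print(allow(example))  # True
```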

More Information

The following table summarizes the supported resource types and operation names.
input.action.resourcePattern.resourceType | input.action.operation
----------------------------------------- | ----------------------
CLUSTER                                   | CLUSTER_ACTION
CLUSTER                                   | CREATE
CLUSTER                                   | DESCRIBE
GROUP                                     | READ
GROUP                                     | DESCRIBE
TOPIC                                     | ALTER
TOPIC                                     | DELETE
TOPIC                                     | DESCRIBE
TOPIC                                     | READ
TOPIC                                     | WRITE
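When writing policy rules, it can help to check pairs against this mapping. Here is a small Python sketch; the lookup follows the table, and the helper function is ours for illustration.

```python
# Supported (resourceType, operation) pairs, following the table above.
SUPPORTED = {
    "CLUSTER": {"CLUSTER_ACTION", "CREATE", "DESCRIBE"},
    "GROUP": {"READ", "DESCRIBE"},
    "TOPIC": {"ALTER", "DELETE", "DESCRIBE", "READ", "WRITE"},
}

# Illustrative helper: is this pair one the authorizer will ask about?
def is_supported(resource_type, operation):
    return operation in SUPPORTED.get(resource_type, set())

print(is_supported("TOPIC", "WRITE"))  # True
print(is_supported("GROUP", "WRITE"))  # False
```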