
How it works...

When implementing a stream processor function, we often need more information than is available in the current event object. It is a best practice when publishing events to include all the relevant data that is available in the publishing context, so that each event represents a micro snapshot of the system at the time of publishing. When this data is not sufficient, we must retrieve additional information; however, in cloud-native systems, we strive to eliminate all synchronous inter-service communication because it reduces the autonomy of the services. Instead, we create a micro event store that is tailored to the needs of the specific service.

First, we implement a listener function that filters for the desired events from the stream. Each event is stored in a DynamoDB table; you can store the entire event or just the information that is needed. When storing these events, we collate related events by carefully defining the HASH and RANGE keys. For example, we might want to collate all events for a specific domain object ID or all events from a specific user ID. In this example, we use event.partitionKey as the hash key, but you can calculate the hash key from any of the available data. For the range key, we need a value that is unique within the hash key. The event.id is a good choice if it is implemented as a V1 UUID, because V1 UUIDs are time-based. The Kinesis sequence number is another good choice. The event.timestamp is an alternative as well, but two events within the same hash key could be created at exactly the same time, producing a collision.
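The mapping from an event to a DynamoDB item can be sketched as a small helper that builds the put request. This is a hypothetical sketch, not the recipe's exact code: the table name, the `micro-event-store` default, and the `toPutParams` helper are assumptions; only `event.partitionKey` and `event.id` come from the text above.

```javascript
// Hypothetical sketch: build the DynamoDB put parameters for one event.
// partitionKey is the HASH key (collates related events); id is the RANGE
// key (a time-based V1 UUID keeps items in historical order).
const toPutParams = (event) => ({
  TableName: process.env.TABLE_NAME || 'micro-event-store', // assumed name
  Item: {
    partitionKey: event.partitionKey, // HASH key
    id: event.id,                     // RANGE key
    event,                            // store the whole event, or a subset
  },
});

// In the listener, each filtered record would then be saved, for example:
// const db = new AWS.DynamoDB.DocumentClient();
// return db.put(toPutParams(event)).promise();
```

Keeping the parameter construction separate from the SDK call makes the key-definition logic easy to unit test without touching DynamoDB.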

The trigger function, which is attached to the DynamoDB stream, takes over after the listener has saved an event. The trigger calls getMicroEventStore to retrieve the micro event store based on the hash key calculated for the current event. At this point, the stream processor has all the relevant data available in memory. The events in the micro event store are in historical order, based on the value used for the range key. The stream processor can use this data however it sees fit to implement its business logic.
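A minimal sketch of what getMicroEventStore might look like, assuming the table layout above: it queries every item that shares the current event's hash key, and DynamoDB returns the items sorted by the range key, which gives the historical order the processor relies on. The helper and table names are assumptions for illustration.

```javascript
// Hypothetical sketch: build the query that fetches the micro event store
// for one hash key. DynamoDB returns items in ascending range-key order,
// which is historical order when the range key is time-based.
const toQueryParams = (partitionKey) => ({
  TableName: process.env.TABLE_NAME || 'micro-event-store', // assumed name
  KeyConditionExpression: 'partitionKey = :pk',
  ExpressionAttributeValues: { ':pk': partitionKey },
  ConsistentRead: true, // ensure the event just saved by the listener is seen
});

// The trigger would then load the whole collation into memory, for example:
// const getMicroEventStore = (partitionKey) =>
//   db.query(toQueryParams(partitionKey)).promise()
//     .then(data => data.Items);
```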

Use the DynamoDB TTL feature to keep the micro event store from growing unbounded.
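TTL works by storing an expiry timestamp (in epoch seconds) on each item; once TTL is enabled on the table for that attribute, DynamoDB deletes expired items in the background. A hedged sketch, assuming the attribute is named `ttl` and a retention window of thirty days (both are illustrative choices, not from the recipe):

```javascript
// Hypothetical sketch: stamp each stored item with an expiry time so the
// table's TTL setting (enabled on the 'ttl' attribute) prunes old events.
const SECONDS_PER_DAY = 24 * 60 * 60;

const withTtl = (item, days = 30, nowMs = Date.now()) => ({
  ...item,
  // DynamoDB TTL expects epoch time in seconds, not milliseconds
  ttl: Math.floor(nowMs / 1000) + days * SECONDS_PER_DAY,
});
```

The listener would wrap each item with `withTtl` before the put, so the micro event store only ever holds the recent history the processor actually needs.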