Azure NSG Flow Log Triggers - Event-Driven Log Ingestion

 

        In the previous post we presented the overall architecture of the NSG Flow Log scheme; refer to the architecture diagram there for a quick recap. In this post we focus on the event-driven log ingestion part, i.e. steps one through three in the architecture diagram.

         NSG Flow Log currently supports only one export target: Blob storage, using the Block Blob type. At 1-minute intervals, NSG Flow Log appends a new block to the Block Blob log file. All log files live in a Blob container named insights-logs-networksecuritygroupflowevent in the configured Storage Account, and the blob log files follow this naming convention: resourceId=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{nsgName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json. Each hour gets its own blob log file, and because the same NSG can be attached to multiple virtual machines, the naming convention includes the macAddress field to identify the virtual machine's network interface.

         A key goal of streaming logs out of Blob storage is real-time delivery. As noted above, the minimum log update interval in Blob storage is 1 minute, so event-driven processing can meet the real-time requirement: whenever a new blob is created or an existing blob is updated, that event drives the subsequent log-processing flow, ensuring the latest logs are injected into the back-end analysis engine. Azure Event Grid natively supports Blob storage as an event source and publishes an event message whenever a blob is created or updated. On the consumer side of those event messages, we need to know the name of the blob that triggered the event and the offset at which the newly appended log data starts. Let's look at the schema of a Blob change event:

[{
  "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/xstoretestaccount",
  "subject": "/blobServices/default/containers/testcontainer/blobs/testfile.txt",
  "eventType": "Microsoft.Storage.BlobCreated",
  "eventTime": "2017-06-26T18:41:00.9584103Z",
  "id": "831e1650-001e-001b-66ab-eeb76e069631",
  "data": {
    "api": "PutBlockList",
    "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
    "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
    "eTag": "0x8D4BCC2E4835CD0",
    "contentType": "text/plain",
    "contentLength": 524288,
    "blobType": "BlockBlob",
    "url": "https://example.blob.core.windows.net/testcontainer/testfile.txt",
    "sequencer": "00000000000004420000000000028963",
    "storageDiagnostics": {
      "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
    }
  },
  "dataVersion": "",
  "metadataVersion": "1"
}]
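Given an event in the schema above, the consumer can pull out the blob URL and size, and the NSG Flow Log naming convention lets us recover which NSG and which NIC the blob belongs to. The sketch below shows one way to do this in Python; the subscription, resource group, NSG name, and MAC address in the sample event are made-up values for illustration.

```python
import json
import re

# A trimmed sample Blob change event, following the Event Grid schema above.
# sub1 / rg1 / nsg1 and the MAC address are illustrative placeholders.
event_json = """[{
  "eventType": "Microsoft.Storage.BlobCreated",
  "data": {
    "url": "https://example.blob.core.windows.net/insights-logs-networksecuritygroupflowevent/resourceId=/SUBSCRIPTIONS/sub1/RESOURCEGROUPS/rg1/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/nsg1/y=2019/m=05/d=27/h=10/m=00/macAddress=000D3AF87856/PT1H.json",
    "contentLength": 524288,
    "blobType": "BlockBlob"
  }
}]"""

# Regex mirroring the NSG Flow Log blob naming convention described above.
BLOB_NAME_RE = re.compile(
    r"resourceId=/SUBSCRIPTIONS/(?P<subscription>[^/]+)"
    r"/RESOURCEGROUPS/(?P<resource_group>[^/]+)"
    r"/PROVIDERS/MICROSOFT\.NETWORK/NETWORKSECURITYGROUPS/(?P<nsg>[^/]+)"
    r"/y=(?P<year>\d+)/m=(?P<month>\d+)/d=(?P<day>\d+)/h=(?P<hour>\d+)/m=00"
    r"/macAddress=(?P<mac>[^/]+)/PT1H\.json"
)

def parse_flow_log_event(event: dict) -> dict:
    """Extract blob identity and current size from one Event Grid blob event."""
    url = event["data"]["url"]
    match = BLOB_NAME_RE.search(url)
    if match is None:
        raise ValueError(f"not an NSG flow log blob: {url}")
    info = match.groupdict()
    info["content_length"] = event["data"]["contentLength"]
    return info

events = json.loads(event_json)
info = parse_flow_log_event(events[0])
print(info["nsg"], info["mac"], info["content_length"])  # nsg1 000D3AF87856 524288
```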

        In the event payload, the url attribute under data identifies the blob that triggered the event, and the contentLength attribute gives the blob's current total size in bytes; the offset of the newly appended log data can be computed as the difference between the contentLength values of two consecutive events for the same blob. Since each offset calculation depends on the preceding event, we need persistent storage to record the last-seen state of each blob file, i.e. the blob name and its contentLength. This is a typical KV persistence scenario; we chose Azure Table Storage, though Cosmos DB would also be a good choice. With the approach clear, let's look at the configuration:

1. Create a Storage Account to hold the NSG Flow Logs.

 

2. Enable NSG Flow Log, pointing it at the Storage Account created in step 1.

 3. Configure the Event Grid Blob event trigger (this step can also be skipped; see the Azure Function ETL section).

 

        With configuration complete, the Blob event trigger is ready, and we can move on to the streaming processing of NSG Flow Logs. To be continued...
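The per-blob offset bookkeeping described earlier can be sketched as follows. This is a minimal sketch, not the actual implementation: a plain dict stands in for Azure Table Storage (a real handler would read and write a table entity keyed by the blob name), and `new_log_range` is a hypothetical helper name.

```python
# Sketch of per-blob offset tracking for the event consumer.
# A dict stands in for Azure Table Storage; in production, persist
# the last-seen contentLength per blob as a table entity instead.
blob_state = {}  # blob name -> last seen contentLength

def new_log_range(blob_name: str, content_length: int):
    """Return the (start, end) byte range appended since the last event.

    The first event for a blob yields the whole file; later events yield
    only the newly appended block(s).
    """
    previous = blob_state.get(blob_name, 0)
    blob_state[blob_name] = content_length
    if content_length <= previous:
        return None  # duplicate or out-of-order event: nothing new to read
    return (previous, content_length)

# First event for this hour's blob: the whole file is new.
print(new_log_range("nsg1/.../PT1H.json", 1024))   # (0, 1024)
# A minute later another block is appended.
print(new_log_range("nsg1/.../PT1H.json", 1536))   # (1024, 1536)
```

The returned byte range is what the downstream step would fetch from the blob (e.g. via a ranged read) before parsing and forwarding the new flow records.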

Origin www.cnblogs.com/wekang/p/10936556.html