Forward logs from Google Cloud Platform to Better Stack.
You will need a Pub/Sub Subscription which you can find or create in Google Cloud Console → Pub/Sub → Subscriptions.
Note the Subscription name for future use, in the format projects/<YOUR_PROJECT>/subscriptions/<YOUR_SUBSCRIPTION>.
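The full subscription name is just your project ID and the subscription's short name joined together. A minimal shell sketch of assembling it (the project and subscription names below are placeholders):

```shell
# Placeholders - substitute your own project ID and subscription name.
PROJECT="my-project"
NAME="my-subscription"

# Full name in the projects/<YOUR_PROJECT>/subscriptions/<YOUR_SUBSCRIPTION> format.
SUBSCRIPTION="projects/${PROJECT}/subscriptions/${NAME}"
echo "$SUBSCRIPTION"
```

You can also print the full names of all existing subscriptions in your active project with gcloud pubsub subscriptions list --format="value(name)".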
Template path: betterstack/pubsub-to-betterstack.json
Input Pub/Sub Subscription: projects/$PROJECT/subscriptions/$NAME
Better Stack Source Token: $SOURCE_TOKEN
Better Stack Ingesting Host: $INGESTING_HOST
Then, click Run job 🚀
You should see your logs in Better Stack → Logs & traces.
Please note it may take a few minutes for the job to initialize.
If you prefer using the gcloud CLI, you can run the job with the following command in your active project and region:
PROJECT="$(gcloud config get-value project)"
REGION="$(gcloud config get-value compute/region)"
SUBSCRIPTION="projects/$PROJECT/subscriptions/<YOUR_SUBSCRIPTION_NAME>"
SOURCE_TOKEN="<YOUR_SOURCE_TOKEN>"
INGESTING_HOST="<YOUR_INGESTING_HOST>"
gcloud dataflow flex-template \
run "pubsub-to-betterstack-$(date +%Y%m%d-%H%M%S)" \
--template-file-gcs-location="gs://betterstack/pubsub-to-betterstack.json" \
--parameters input_subscription="$SUBSCRIPTION" \
--parameters better_stack_source_token="$SOURCE_TOKEN" \
--parameters better_stack_ingesting_host="$INGESTING_HOST" \
--region="$REGION"
You should see your logs in Better Stack → Logs & traces.
Please note it may take a few minutes for the job to initialize.
Running into issues or have questions? Please let us know at hello@betterstack.com.
We're happy to help! 🙏
When not specified, the streaming job uses the default machine type for the Google Compute Engine instances in your pipeline, for example n1-standard-1.
You can customize the machine type in the Optional Parameters section while creating the job: uncheck Use default machine type and pick a different type. Alternatively, use the --worker-machine-type CLI parameter.
You can also customize the autoscaling options or the zone used for running your Dataflow job.
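As a sketch, the run command from the CLI section above can be extended with a custom machine type (n2-standard-2 here is only an illustration; pick any supported type):

```shell
# Same flex-template run as above, with a custom worker machine type.
# n2-standard-2 is an example value, not a recommendation.
gcloud dataflow flex-template \
    run "pubsub-to-betterstack-$(date +%Y%m%d-%H%M%S)" \
    --template-file-gcs-location="gs://betterstack/pubsub-to-betterstack.json" \
    --parameters input_subscription="$SUBSCRIPTION" \
    --parameters better_stack_source_token="$SOURCE_TOKEN" \
    --parameters better_stack_ingesting_host="$INGESTING_HOST" \
    --worker-machine-type="n2-standard-2" \
    --region="$REGION"
```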
You can read more on Pricing in the official Google Cloud docs.
By default, the job is created in the default network.
This may lead to errors while initializing the job, such as:
Failed to start the VM used for launching because of status code: INVALID_ARGUMENT,
reason: Invalid Error: Message: Invalid value for field
‘resource.networkInterfaces[0].network’: ‘global/networks/default’.
The referenced network resource cannot be found.
You can customize the network or subnetwork used in the Optional Parameters section while creating the job. Alternatively, if you're using the CLI, use the --network or --subnetwork parameters.
The Better Stack ingesting host must be routable from the VPC that the job runs in.
You can read more about how to Specify a network and subnetwork in the official Dataflow docs.
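For example, the run command from the CLI section above can target an existing VPC instead of the default network; this is a sketch, and the network and subnetwork names are placeholders you would replace with resources from your own project:

```shell
# Run the job in an existing VPC. "my-vpc" and "my-subnet" are placeholders.
gcloud dataflow flex-template \
    run "pubsub-to-betterstack-$(date +%Y%m%d-%H%M%S)" \
    --template-file-gcs-location="gs://betterstack/pubsub-to-betterstack.json" \
    --parameters input_subscription="$SUBSCRIPTION" \
    --parameters better_stack_source_token="$SOURCE_TOKEN" \
    --parameters better_stack_ingesting_host="$INGESTING_HOST" \
    --network="my-vpc" \
    --subnetwork="regions/$REGION/subnetworks/my-subnet" \
    --region="$REGION"
```

Remember that the Better Stack ingesting host must be routable from the chosen VPC.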
Supply the following options in the Optional Parameters section in the Web UI, or via --parameters in the CLI command.
batch_size - Number of messages to batch before sending. Default: 100
window_size - Window size in seconds for batching messages. Default: 10
max_retries - Maximum number of retry attempts for failed requests. Uses exponential backoff between retries. Default: 3
initial_retry_delay - Initial delay in seconds between retries. The delay doubles with each retry attempt. Default: 1
You can also fork the open-source repository on GitHub and fully customize the template.
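To illustrate how max_retries and initial_retry_delay interact, here is a small shell sketch of the delay schedule implied by the defaults (max_retries=3, initial_retry_delay=1):

```shell
# Delay schedule implied by the defaults: the wait doubles after each retry.
max_retries=3
delay=1  # initial_retry_delay in seconds
for attempt in $(seq 1 "$max_retries"); do
  echo "retry ${attempt}: wait ${delay}s"
  delay=$((delay * 2))
done
```

With the defaults, the job waits 1s, 2s, and 4s between the three retry attempts before giving up on a request.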