The key feature of Azure Functions is bindings. Your function code is triggered by an event, and it can receive input from or write output to other Azure services using bindings. If you need to read from Cosmos DB or write to a Service Bus queue, you can do it without writing any of the infrastructure code.
In this lab we’ll use a timer trigger for a function which writes output to Blob Storage. We’ll also use Azurite, the Azure Storage emulator, and see how to set configuration for functions.
The function code is in the `TimerToBlob` directory.
This is standard C# code, with all the Functions features declared with attributes:

- `[FunctionName]` identifies the method as a function and names it
- `[StorageAccount]` specifies the name of the connection string setting for the Storage Account
- `[TimerTrigger]` sets the function to run on a timer using a modified cron syntax
- `[Blob]` is an output binding which will create a blob file with the content set in the function
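You can read the full source in the lab folder; as a rough sketch of how those attributes fit together (the blob path, schedule and log messages here are illustrative, not necessarily what’s in the repo):

```csharp
using System;
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class Heartbeat
{
    // names the function; StorageAccount points at the *name* of a
    // connection string setting, not the connection string itself
    [FunctionName("Heartbeat")]
    [StorageAccount("HeartbeatOutputStorageConnectionString")]
    public static void Run(
        // timer trigger with a six-field cron expression - every five minutes
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer,
        // output binding - the runtime uploads this string as the blob content;
        // the {DateTime} binding expression creates a new blob on each run
        [Blob("heartbeat/{DateTime}.txt", FileAccess.Write)] out string blobContent,
        ILogger log)
    {
        log.LogInformation($"Heartbeat fired at: {DateTime.UtcNow}");
        blobContent = $"Heartbeat at: {DateTime.UtcNow}";
    }
}
```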
<details>
  <summary>For reference</summary>

Here’s how the function was created:
```
func init TimerToBlob --dotnet

cd TimerToBlob

func templates list

func new --name Heartbeat --template "Timer trigger"

dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage.Blobs
```
</details>
The code is very simple. We don’t need to write any logic to trigger the event - that’s all taken care of by the timer trigger. And we don’t need to write any code to connect to a Blob Storage container and upload a file - the binding does all that.
Functions which have dependencies can have configuration values set in a file called `local.settings.json` - this is only used when you run the function locally.
The file is deliberately excluded from the git repo as it could contain sensitive connection strings.
Create a file called `local.settings.json` in the folder `labs/functions/timer/TimerToBlob` with these values:
```
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "HeartbeatOutputStorageConnectionString": "UseDevelopmentStorage=true"
  }
}
```
This sets the storage connection strings to use the local Azure Storage emulator. We need to run the emulator, and the easiest way is in a container.
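`UseDevelopmentStorage=true` is a shorthand for the emulator’s well-known development account. For reference, the expanded blob connection string (using Azurite’s published dev account name and key) looks like this:

```
DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;
```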
Make sure you have Docker Desktop running and start the storage emulator:
```
docker run -d -p 10000-10002:10000-10002 --name azurite mcr.microsoft.com/azure-storage/azurite
```
Now you can run the function locally (you may want to change the trigger to run more frequently - e.g. every two minutes is `0 */2 * * * *`):
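The schedule uses the six-field NCRONTAB format (a seconds field in front of the usual five cron fields), so the change is just an edit to the `[TimerTrigger]` attribute - e.g. the parameter line from the sketch above would become:

```csharp
// {second} {minute} {hour} {day} {month} {day-of-week} - fires every two minutes
[TimerTrigger("0 */2 * * * *")] TimerInfo timer,
```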
```
cd labs/functions/timer/TimerToBlob

func start
```
You should see output from the function host starting up, and then the function firing every few minutes.
You can check the container logs to see files are written in blob storage (or you can browse your emulator storage with Storage Explorer):
```
docker logs azurite
```
Running functions locally gives you a quick feedback loop, and helps you identify all the dependencies and configuration settings you will need when you deploy to Azure.
We’ll start with the standard setup for the Function App:
```
# remember to use a location near you which supports the consumption plan
az group create -n labs-functions-timer --tags courselabs=azure -l eastus

az storage account create -g labs-functions-timer --sku Standard_LRS -l eastus -n <sa-name>

az functionapp create -g labs-functions-timer --runtime dotnet --functions-version 4 --consumption-plan-location eastus --storage-account <sa-name> -n <function-name>
```
Now it’s over to you for the rest. The function needs:

- an app setting called `HeartbeatOutputStorageConnectionString`, with a connection string for a Storage Account
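If you get stuck wiring that up, one approach (a sketch, not the only way - it assumes you reuse the lab’s Storage Account, but any account will do) is to read the connection string with the CLI and set it as an app setting:

```
# print the connection string for the lab Storage Account
az storage account show-connection-string -g labs-functions-timer -n <sa-name> --query connectionString -o tsv

# set it as an app setting on the Function App
az functionapp config appsettings set -g labs-functions-timer -n <function-name> --settings "HeartbeatOutputStorageConnectionString=<connection-string>"
```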
When you have everything ready, you can deploy the function from the local folder:

```
func azure functionapp publish <function-name>
```
This function produces a (fake) diagnostic check for a system. How would you add a diagnostic check for another system? More code in this function or a separate function? If you had an API which reported the current status for all components, how would it read the data?
Stuck? Try my suggestions.
Stop the Azure Storage emulator:

```
docker rm -f azurite
```
Delete the lab RG:

```
az group delete -y --no-wait -n labs-functions-timer
```