<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Linux on Azure]]></title><description><![CDATA[about Linux and Open Source on Azure]]></description><link>https://lnx.azurewebsites.net/</link><generator>Ghost 0.11</generator><lastBuildDate>Thu, 16 Apr 2026 23:05:01 GMT</lastBuildDate><atom:link href="https://lnx.azurewebsites.net/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Ubuntu new Offers and SKUs for Azure VMs]]></title><description><![CDATA[<p>Some of you have noticed that the Offers and SKUs for Ubuntu images in Azure have changed recently. A brand-new naming scheme is used for the Offers - the OS family is now split into narrower offers (previously everything lived under a single UbuntuServer offer):</p>

<pre><code>az vm image list-offers \
--publisher Canonical \
--location westeurope \
-o table

Location    Name  
----------</code></pre>]]></description><link>https://lnx.azurewebsites.net/ubuntu-new-offers-and-skus-for-azure-vms/</link><guid isPermaLink="false">34aae3d8-ba07-4746-ab37-49fb755824d2</guid><category><![CDATA[Ubuntu]]></category><category><![CDATA[VM]]></category><category><![CDATA[Images]]></category><dc:creator><![CDATA[Michal Smereczynski]]></dc:creator><pubDate>Tue, 19 Jan 2021 12:45:36 GMT</pubDate><content:encoded><![CDATA[<p>Some of you have noticed that the Offers and SKUs for Ubuntu images in Azure have changed recently. A brand-new naming scheme is used for the Offers - the OS family is now split into narrower offers (previously everything lived under a single UbuntuServer offer):</p>

<pre><code>az vm image list-offers \
--publisher Canonical \
--location westeurope \
-o table

Location    Name  
----------  -------------------------------------------
westeurope  0001-com-ubuntu-minimal-focal-daily  
westeurope  0001-com-ubuntu-minimal-groovy-daily  
westeurope  0001-com-ubuntu-minimal-hirsute-daily  
westeurope  0001-com-ubuntu-pro-advanced-sla  
westeurope  0001-com-ubuntu-pro-advanced-sla-att  
westeurope  0001-com-ubuntu-pro-advanced-sla-nestle  
westeurope  0001-com-ubuntu-pro-advanced-sla-servicenow  
westeurope  0001-com-ubuntu-pro-advanced-sla-shell  
westeurope  0001-com-ubuntu-pro-bionic  
westeurope  0001-com-ubuntu-pro-bionic-fips  
westeurope  0001-com-ubuntu-pro-focal  
westeurope  0001-com-ubuntu-pro-hidden-msft-fips  
westeurope  0001-com-ubuntu-pro-trusty  
westeurope  0001-com-ubuntu-pro-xenial  
westeurope  0001-com-ubuntu-pro-xenial-fips  
westeurope  0001-com-ubuntu-server-eoan  
westeurope  0001-com-ubuntu-server-focal  
westeurope  0001-com-ubuntu-server-focal-daily  
westeurope  0001-com-ubuntu-server-groovy  
westeurope  0001-com-ubuntu-server-groovy-daily  
westeurope  0001-com-ubuntu-server-hirsute-daily  
westeurope  0002-com-ubuntu-minimal-bionic-daily  
westeurope  0002-com-ubuntu-minimal-disco-daily  
westeurope  0002-com-ubuntu-minimal-focal-daily  
westeurope  0002-com-ubuntu-minimal-xenial-daily  
westeurope  0003-com-ubuntu-minimal-eoan-daily  
westeurope  0003-com-ubuntu-server-trusted-vm  
westeurope  test-ubuntu-premium-offer-0002  
westeurope  Ubuntu15.04Snappy  
westeurope  Ubuntu15.04SnappyDocker  
westeurope  UbunturollingSnappy  
westeurope  UbuntuServer  
westeurope  Ubuntu_Core  
</code></pre>

<p>Thanks to that, the new SKU lists are short:</p>

<pre><code>az vm image list-skus \
--publisher Canonical \
--offer 0001-com-ubuntu-server-focal-daily \
--location westeurope \
-o table

Location    Name  
----------  --------------------
westeurope  20_04-daily-lts  
westeurope  20_04-daily-lts-gen2  
</code></pre>

<p>In the end, we get a much shorter list of images:</p>

<pre><code>az vm image list --all \
--publisher Canonical \
--offer 0001-com-ubuntu-server-focal-daily \
--sku 20_04-daily-lts \
--location westeurope \
-o table

Offer                               Publisher    Sku                   Urn                                                                                Version  
----------------------------------  -----------  --------------------  ---------------------------------------------------------------------------------  ---------------
0001-com-ubuntu-server-focal-daily  Canonical    20_04-daily-lts       Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts:20.04.202012100       20.04.202012100  
0001-com-ubuntu-server-focal-daily  Canonical    20_04-daily-lts       Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts:20.04.202012110       20.04.202012110  
0001-com-ubuntu-server-focal-daily  Canonical    20_04-daily-lts       Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts:20.04.202101050       20.04.202101050  
0001-com-ubuntu-server-focal-daily  Canonical    20_04-daily-lts       Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts:20.04.202101060       20.04.202101060  
0001-com-ubuntu-server-focal-daily  Canonical    20_04-daily-lts       Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts:20.04.202101120       20.04.202101120  
0001-com-ubuntu-server-focal-daily  Canonical    20_04-daily-lts       Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts:20.04.202101140       20.04.202101140  
0001-com-ubuntu-server-focal-daily  Canonical    20_04-daily-lts       Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts:20.04.202101180       20.04.202101180  
0001-com-ubuntu-server-focal-daily  Canonical    20_04-daily-lts-gen2  Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts-gen2:20.04.202011260  20.04.202011260  
0001-com-ubuntu-server-focal-daily  Canonical    20_04-daily-lts-gen2  Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts-gen2:20.04.202012010  20.04.202012010  
0001-com-ubuntu-server-focal-daily  Canonical    20_04-daily-lts-gen2  Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts-gen2:20.04.202012100  20.04.202012100  
0001-com-ubuntu-server-focal-daily  Canonical    20_04-daily-lts-gen2  Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts-gen2:20.04.202012110  20.04.202012110  
0001-com-ubuntu-server-focal-daily  Canonical    20_04-daily-lts-gen2  Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts-gen2:20.04.202101050  20.04.202101050  
0001-com-ubuntu-server-focal-daily  Canonical    20_04-daily-lts-gen2  Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts-gen2:20.04.202101060  20.04.202101060  
0001-com-ubuntu-server-focal-daily  Canonical    20_04-daily-lts-gen2  Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts-gen2:20.04.202101120  20.04.202101120  
0001-com-ubuntu-server-focal-daily  Canonical    20_04-daily-lts-gen2  Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts-gen2:20.04.202101140  20.04.202101140  
0001-com-ubuntu-server-focal-daily  Canonical    20_04-daily-lts-gen2  Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts-gen2:20.04.202101180  20.04.202101180  
</code></pre>
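<p>Each row above is addressed by its URN, in the form <code>Publisher:Offer:Sku:Version</code>; with <code>az vm create --image</code> you can also pass <code>latest</code> in place of the version. A quick shell sketch of how a URN from the listing above splits apart:</p>

```shell
# An image URN is Publisher:Offer:Sku:Version - split it on the colons.
urn="Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts:20.04.202101180"
publisher=${urn%%:*}; rest=${urn#*:}
offer=${rest%%:*};    rest=${rest#*:}
sku=${rest%%:*};      version=${rest#*:}

echo "$publisher / $offer / $sku / $version"
```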

<p>If you are wondering what the root cause of this change is, I have an answer for you:</p>

<blockquote>
  <p>We need to separate out our different releases into different offers, hence having them all distinct and then the numbering is because you can't delete or fully replace them
  so if publishing is stuck on 1 vm image in 1 listing, you cannot update any other vm image in that listing.</p>
</blockquote>]]></content:encoded></item><item><title><![CDATA[How to mount Azure Data Lake Storage Gen2 in Linux]]></title><description><![CDATA[<p>Sometimes you need to fit a new brick into an old wall. For me, it was the need to use an incredibly old Pentaho ETL with a brand-new Azure Data Lake Storage Gen2 without changing any pipeline. The old storage was based on SFTP and mounted in the local filesystem</p>]]></description><link>https://lnx.azurewebsites.net/how-to-mount-azure-data-lake-storage-gen2-in-linux/</link><guid isPermaLink="false">f887c53d-0660-4155-8f02-a2348fcbd1c5</guid><category><![CDATA[Azure]]></category><category><![CDATA[Azure Data Lake]]></category><category><![CDATA[HDFS]]></category><category><![CDATA[Hadoop]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Michal Smereczynski]]></dc:creator><pubDate>Tue, 25 Aug 2020 13:26:46 GMT</pubDate><content:encoded><![CDATA[<p>Sometimes you need to fit a new brick into an old wall. For me, it was the need to use an incredibly old Pentaho ETL with a brand-new Azure Data Lake Storage Gen2 without changing any pipeline. The old storage was based on SFTP and mounted in the local filesystem on the ETL machine. The machine runs CentOS 6.5 without an option to upgrade (for reasons that don't really matter here). Of course, the solution you will see below will also work on newer OSes - you just need to change the repos.</p>

<h2 id="installclouderacdh">Install Cloudera CDH</h2>

<blockquote>
  <p>CDH (Cloudera Distribution Hadoop) is an open-source Apache Hadoop distribution provided by Cloudera Inc which is a Palo Alto-based American enterprise software company. CDH (Cloudera's Distribution Including Apache Hadoop) is the most complete, tested, and widely deployed distribution of Apache Hadoop.</p>
</blockquote>

<script src="https://gist.github.com/smereczynski/b10a31d63461015525b51b47b1573b8a.js"></script>

<h2 id="configurehdfs">Configure HDFS</h2>

<p>Edit the <code>/etc/hadoop/conf/core-site.xml</code> config file using the editor you like, filling in the <code>{{AAD_tenant_ID}}</code>, <code>{{client_id}}</code> and <code>{{client_secret}}</code> placeholders (Service Principal credentials) in the template below:</p>

<script src="https://gist.github.com/smereczynski/32e340ed71067877187c548c6c9dd992.js"></script>
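<p>The gist contains the full template; in essence it sets the standard Hadoop ABFS OAuth client-credentials properties, roughly like this (a sketch based on the generic ABFS connector settings, not a copy of the gist):</p>

```xml
<configuration>
  <property>
    <name>fs.azure.account.auth.type</name>
    <value>OAuth</value>
  </property>
  <property>
    <name>fs.azure.account.oauth.provider.type</name>
    <value>org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider</value>
  </property>
  <property>
    <name>fs.azure.account.oauth2.client.id</name>
    <value>{{client_id}}</value>
  </property>
  <property>
    <name>fs.azure.account.oauth2.client.secret</name>
    <value>{{client_secret}}</value>
  </property>
  <property>
    <name>fs.azure.account.oauth2.client.endpoint</name>
    <value>https://login.microsoftonline.com/{{AAD_tenant_ID}}/oauth2/token</value>
  </property>
</configuration>
```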

<h2 id="mounthdfsendpoint">Mount HDFS endpoint</h2>

<p>Now you can mount your ADLS Gen2 HDFS endpoint in your filesystem, filling in the <code>{{storage_account}}</code>, <code>{{container/fs}}</code> and <code>{{mount_point}}</code> placeholders in the command template below:</p>

<p><code>hadoop-fuse-dfs abfss://{{container/fs}}@{{storage_account}}.dfs.core.windows.net /{{mount_point}}</code></p>

<p>You (probably) want CentOS to mount your ADLS Gen2 HDFS endpoint on every startup. You can do it using <code>/etc/fstab</code>:</p>

<p><code>hadoop-fuse-dfs#abfss://{{container/fs}}@{{storage_account}}.dfs.core.windows.net {{mount_point}} fuse allow_other,usetrash,rw 2 0</code> or similar.</p>

<p>In my case that does not apply, because I have a few filesystems that I need to mount inside each other, so the mount order matters. To handle that, I added the <code>hadoop-fuse-dfs</code> commands in the proper order to the <code>/etc/rc.local</code> script.</p>
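<p>As an illustration (the filesystem names and paths here are made up, not my actual setup), the <code>/etc/rc.local</code> fragment just lists the mounts in dependency order:</p>

```
# /etc/rc.local fragment - the outer filesystem must be mounted
# before the one nested inside it.
hadoop-fuse-dfs abfss://outerfs@mystorageaccount.dfs.core.windows.net /data
hadoop-fuse-dfs abfss://innerfs@mystorageaccount.dfs.core.windows.net /data/nested
```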

<h2 id="optimizingmountablehdfs">Optimizing Mountable HDFS</h2>

<p>As you can find in CDH documentation:</p>

<ul>
<li>Cloudera recommends that you use the -obig_writes option on kernels later than 2.6.26. This option allows for better performance of writes.</li>
<li>By default, the CDH package installation creates the /etc/default/hadoop-fuse file with a maximum heap size of 128 MB. You might need to change the JVM minimum and maximum heap size for better performance. For example:
<code>export LIBHDFS_OPTS="-Xms64m -Xmx256m"</code>. Be careful not to set the minimum to a higher value than the maximum.</li>
</ul>]]></content:encoded></item><item><title><![CDATA[Azure App Service on Linux (PHP) with Azure Front Door  - access control configuration]]></title><description><![CDATA[<p>If you need to use Azure App Service on Linux with PHP code (like WordPress) behind Azure Front Door, which you should, then you need to secure communication between Azure Front Door and the Azure App Service Web App. If you don't want to or cannot use a custom Docker container,</p>]]></description><link>https://lnx.azurewebsites.net/azure-app-service-on-linux-php-with-azure-front-door-access-control-configuration/</link><guid isPermaLink="false">5e3a8084-0e32-426e-b44e-cd90ae4c8618</guid><category><![CDATA[Azure]]></category><category><![CDATA[Azure Front Door]]></category><category><![CDATA[Azure App Service on Linux]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Michal Smereczynski]]></dc:creator><pubDate>Mon, 04 May 2020 19:51:24 GMT</pubDate><content:encoded><![CDATA[<p>If you need to use Azure App Service on Linux with PHP code (like WordPress) behind Azure Front Door, which you should, then you need to secure communication between Azure Front Door and the Azure App Service Web App. If you don't want to or cannot use a custom Docker container, you are pinned to the built-in PHP container in App Service on Linux.</p>

<p>In this configuration, an Apache Web Server (httpd) is used to serve your app. If you scale out, you get as many Apache Web Servers as there are instances configured in your App Service Plan. At the front there is a load balancer. App Service instances are NATed. App Services have inbound and outbound IP address pools, plus a DNS name and domain. Azure Front Door does not integrate into App Service - it is totally separate from it. <br>
But you are running Azure Front Door, because it is cool, and you are pointing at the Azure App Service as a backend. And you want to serve your website only through Azure Front Door, restricting all other traffic.</p>

<p>It's easy. You just need to restrict traffic to the Web App to traffic originating from the Azure Front Door backend address ranges. The list of all public IP ranges used by Azure services is public and you can find it <a href="https://www.microsoft.com/download/details.aspx?id=56519">here</a>. All you need to do is find the <em>AzureFrontDoor.Backend</em> value in the list of objects and create a JSON definition for the <em>ipSecurityRestrictions</em> setting of your Web App. Don't forget to add Azure's basic infrastructure services (through the virtualized host IP addresses 168.63.129.16 and 169.254.169.254) and the IPv6 address range, currently limited to 2a01:111:2050::/44.</p>

<script src="https://gist.github.com/smereczynski/87bc556cd18b8c02a849a6096144bb1b.js"></script>

<p>But what if someone runs their own Azure Front Door and tries to route it to your Web App? It will work - anyone can configure it to work. What you want to do is restrict access to YOUR Azure Front Door instance only. It's also easy and documented. All you need is to filter request headers for the X-Azure-FDID header value and pass only those requests that match your Front Door ID. For PHP running on the built-in App Service on Linux image, you can do it at the Apache Web Server layer. You need to create a <em>.htaccess</em> file in your application's repository (in the wwwroot directory) and add an Apache2 Web Server directive to restrict access only to requests with the X-Azure-FDID header pointing to your Azure Front Door instance:</p>

<script src="https://gist.github.com/smereczynski/98f93a481d4587d38bd73e5112c6062f.js"></script>
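<p>In essence, the <em>.htaccess</em> rule refuses any request whose <code>X-Azure-FDID</code> header does not carry your instance's ID. A mod_rewrite sketch with a placeholder ID (see the gist above for the exact file I use):</p>

```
RewriteEngine On
# Forbid (403) every request that does not present our Front Door ID.
RewriteCond %{HTTP:X-Azure-FDID} !^xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx$
RewriteRule .* - [F]
```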

<p>For other stacks you need to pick the proper option - e.g. for the Python image you can do it in your app's code or try to do it at the Gunicorn layer.</p>

<p>But how do you determine the Azure Front Door instance ID? It's also easy. You need to send a GET request to the Azure Resource Manager API (version 2020-01-01) for your Azure Front Door instance (by name). In the response you will find the "frontdoorId" property.</p>

<script src="https://gist.github.com/smereczynski/b858a43e293a955e3a19b33c6e8ca0cb.js"></script>]]></content:encoded></item><item><title><![CDATA[Azure Role-Based Access Rootkit]]></title><description><![CDATA[<p>Auditing your Azure environments is an extremely important task. Not only because of external threats but also because of internal threats. One of the possible attack vectors is permission elevation using custom RBAC roles. The problem here is not that it is not possible to audit, the problem is that most</p>]]></description><link>https://lnx.azurewebsites.net/azure-role-based-access-rootkit/</link><guid isPermaLink="false">3b0d40b4-f17f-4e2d-8d4e-08582dc99989</guid><category><![CDATA[Azure Resource Manager]]></category><category><![CDATA[RBAC]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Michal Smereczynski]]></dc:creator><pubDate>Fri, 18 Jan 2019 17:26:00 GMT</pubDate><content:encoded><![CDATA[<p>Auditing your Azure environments is an extremely important task. Not only because of external threats but also because of internal threats. One of the possible attack vectors is permission elevation using custom RBAC roles. The problem here is not that it is not possible to audit, the problem is that most Azure administrators are not doing it.</p>

<p>The biggest part of the problem is that Azure does not validate custom roles at creation time, so we are able to create a role that looks almost the same as a built-in one.</p>

<p>Let's try with built-in <strong>Reader</strong> role:</p>

<pre><code>{
    "assignableScopes": [
      "/"
    ],
    "description": "Lets you view everything, but not make any changes.",
    "id": "/subscriptions/ssssssss-ssss-ssss-ssss-ssssssssssss/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7",
    "name": "acdd72a7-3385-48ef-bd42-f606fba81ae7",
    "permissions": [
      {
        "actions": [
          "*/read"
        ],
        "dataActions": [],
        "notActions": [],
        "notDataActions": []
      }
    ],
    "roleName": "Reader",
    "roleType": "BuiltInRole",
    "type": "Microsoft.Authorization/roleDefinitions"
}
</code></pre>

<p>We can clearly see here that a user with this role is able to perform the <strong>read</strong> action on all resources, the role type is <strong>BuiltInRole</strong> and its name is <strong>Reader</strong>.</p>

<p>Now let's check our custom role named <strong>Reader</strong>. Yes, the same name at first glance - because you did not notice one small space.</p>

<pre><code>{
    "assignableScopes": [
  "/subscriptions/ssssssss-ssss-ssss-ssss-ssssssssssss"
    ],
    "description": "Lets you view everything, but not make any changes.",
    "id": "/subscriptions/ssssssss-ssss-ssss-ssss-ssssssssssss/providers/Microsoft.Authorization/roleDefinitions/54f8ccae-dc88-47f1-b425-174ee843f162",
    "name": "54f8ccae-dc88-47f1-b425-174ee843f162",
    "permissions": [
      {
        "actions": [
          "*"
        ],
        "dataActions": [],
        "notActions": [],
        "notDataActions": []
      }
    ],
    "roleName": "Reader ",
    "roleType": "CustomRole",
    "type": "Microsoft.Authorization/roleDefinitions"
}
</code></pre>

<p>Notice that the actions a user with this role can perform are "*" for the whole subscription (which is the scope). The name of the role is "Reader " and the description is the same as in the built-in "Reader" role.</p>

<p>It shouldn't be so hard to notice that a user has elevated permissions, because it is not the "Reader" role but "Reader "? False.</p>

<p>Let's check the assignments list using Azure CLI:</p>

<pre><code>{
    "canDelegate": null,
    "id": "/subscriptions/ssssssss-ssss-ssss-ssss-ssssssssssss/providers/Microsoft.Authorization/roleAssignments/f8f14d8b-27c6-4969-8361-0c76a4e64bda",
    "name": "f8f14d8b-27c6-4969-8361-0c76a4e64bda",
    "principalId": "5a59308f-7c05-4512-9001-d3122d7e22e5",
    "principalName": "tester@free-media.eu",
    "roleDefinitionId": "/subscriptions/ssssssss-ssss-ssss-ssss-ssssssssssss/providers/Microsoft.Authorization/roleDefinitions/54f8ccae-dc88-47f1-b425-174ee843f162",
    "roleDefinitionName": "Reader ",
    "scope": "/subscriptions/ssssssss-ssss-ssss-ssss-ssssssssssss",
    "type": "Microsoft.Authorization/roleAssignments"
}
</code></pre>

<p>Of course, when validating, you will check the <strong>roleDefinitionId</strong>? And of course you will validate that the <strong>roleDefinitionName</strong> is "Reader", not "Reader "? You are a PRO, so probably you will...</p>
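<p>Instead of relying on your eyes, you can compare names with their trimmed forms programmatically. A minimal audit sketch (the <code>check_role_name</code> helper is my own; you would feed it names from e.g. <code>az role definition list --custom-role-only true --query "[].roleName" -o tsv</code>):</p>

```shell
# Flag role names that end with hidden whitespace.
check_role_name() {
  name="$1"
  trimmed="${name%"${name##*[![:space:]]}"}"   # strip trailing whitespace
  if [ "$name" != "$trimmed" ]; then
    echo "SUSPICIOUS: [$name] masquerades as [$trimmed]"
  fi
}

check_role_name "Reader "   # flagged: trailing space
check_role_name "Reader"    # silent: nothing to report
```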

<p>But how is it possible? It's simple. You just need to create a custom role with the same name as a built-in one and add a hidden character, like a space, at the end:</p>

<pre><code>{
  "Name": "Reader ",
  "IsCustom": true,
  "Description": "Lets you view everything, but not make any changes.",
  "Actions": [
    "*"
  ],
  "NotActions": [],
  "DataActions": [],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/ssssssss-ssss-ssss-ssss-ssssssssssss"
  ]
}
</code></pre>

<p>Azure is just not validating it.</p>]]></content:encoded></item><item><title><![CDATA[Saving time with Azure Resource Graph]]></title><description><![CDATA[<p>This year, at the Ignite conference, Microsoft announced Azure Resource Graph service. As we can read in the documentation, Azure Resource Graph is:</p>

<blockquote>
  <p>a service in Azure that is designed to extend Azure Resource Management by providing efficient and performant resource exploration with the ability to query at scale across</p></blockquote>]]></description><link>https://lnx.azurewebsites.net/saving-time-with-azure-resource-graph/</link><guid isPermaLink="false">fddb7fa6-df92-48b2-b7a8-f03dd497cdc5</guid><category><![CDATA[Azure Resource Graph]]></category><category><![CDATA[Azure Resource Manager]]></category><dc:creator><![CDATA[Michal Smereczynski]]></dc:creator><pubDate>Sat, 13 Oct 2018 17:14:00 GMT</pubDate><content:encoded><![CDATA[<p>This year, at the Ignite conference, Microsoft announced Azure Resource Graph service. As we can read in the documentation, Azure Resource Graph is:</p>

<blockquote>
  <p>a service in Azure that is designed to extend Azure Resource Management by providing efficient and performant resource exploration with the ability to query at scale across all subscriptions and management groups so that you can effectively govern your environment. These queries provide the following capabilities:</p>
  
  <ul>
  <li>Ability to query resources with complex filtering, grouping, and sorting by resource properties.</li>
  <li>Ability to iteratively explore resources based on governance requirements and convert the resulting expression into a policy definition.</li>
  <li>Ability to assess the impact of applying policies in a vast cloud environment.</li>
  </ul>
</blockquote>

<h2 id="whatisazureresourcegraph">What is Azure Resource Graph?</h2>

<p>Azure Resource Manager sends data to a cache that exposes some information about resources (resource name, ID, type, resource group, subscription, and location). Normally we make calls to each resource provider and request this information for each resource. That means not only many more calls to make but also the need to create a script to handle that operation.</p>

<p>With Azure Resource Graph, we can access this information directly, using a powerful query language we may already know - the <a href="https://docs.microsoft.com/en-us/azure/kusto/">Kusto query language</a>.</p>
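<p>For a taste of the syntax, a query counting VMs per region could look like this (a sketch, written in the same table-less form that <code>az graph query</code> accepted at the time):</p>

```
where type =~ 'Microsoft.Compute/virtualMachines'
| summarize count() by location
```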

<h2 id="howtouseazureresourcegraph">How to use Azure Resource Graph?</h2>

<p>To use Azure Resource Graph, you need at least Reader (RBAC) role on the resources you want to query.</p>

<p>To query Azure Resource Graph, you can use Azure CLI, PowerShell, SDK or REST API directly.</p>

<p><a href="https://docs.microsoft.com/en-us/azure/governance/resource-graph/samples/starter">Sample queries</a></p>

<h2 id="howmuchtimewearesaving">How much time are we saving?</h2>

<p>In simple words: a lot. Let's consider a simple scenario with just 10 VMs in just one subscription. If we want to summarize the operating systems we are using, then without Azure Resource Graph we need to call Azure Resource Manager for all resources with a resource type like Microsoft.Compute/virtualMachines and then iterate over all VMs for the <code>storageProfile.osDisk.osType</code> property. In Azure CLI it will be something like this:</p>

<pre><code>for id in `az resource list --resource-type 'Microsoft.Compute/virtualMachines' --query '[].id' -o tsv`; do
  az resource show --ids $id --query 'properties.storageProfile.osDisk.osType'
done | uniq -c
</code></pre>

<p>The <code>uniq -c</code> command at the end counts every unique OS type (strictly speaking, it groups adjacent lines, so for arbitrary output you may want to pipe through <code>sort</code> first). The output of this script for my subscription is:</p>

<pre><code>3 "Windows"
7 "Linux"
</code></pre>

<p>and <strong>it took a little bit more than 11s to complete</strong>.</p>

<p>With Azure Resource Graph I used <code>az graph query</code> command with simple Kusto query:</p>

<pre><code>az graph query -q "where type =~ 'Microsoft.Compute/virtualMachines' | summarize count() by tostring(properties.storageProfile.osDisk.osType)" -s xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
</code></pre>

<p>The output of this command is:</p>

<pre><code>[
  {
    "count_": 7,
    "properties_storageProfile_osDisk_osType": "Linux"
  },
  {
    "count_": 3,
    "properties_storageProfile_osDisk_osType": "Windows"
  }
]
</code></pre>

<p>and <strong>it took around 1.5s to complete</strong>.</p>

<p>As you can see, in the graph query I limited the query to just one subscription - that's because Azure Resource Graph returns results for all resources in all subscriptions we have access to. I have also noticed that this rule does not hold for subscriptions where we are external AAD users with RBAC roles - in that situation we receive an "Access denied" message.</p>

<h2 id="schema">Schema</h2>

<p>The response schema for an ARG query is different from that of a resource query. As an example, let's check the schema for a query about a VM.</p>

<p>Command:</p>

<p><code>az vm show -g group -n VM0</code></p>

<p>Response:</p>

<script src="https://gist.github.com/smereczynski/8283952dcd4160a1506d2f07d2e483c4.js"></script>

<p>If you are wondering what Aliases are, check <a href="https://lnx.azurewebsites.net/saving-time-with-azure-resource-graph/script%20src=">here</a>.</p>]]></content:encoded></item><item><title><![CDATA[Postman Pre-request Script for Azure REST API]]></title><description><![CDATA[<p>When you are using Postman and you are working with Azure, there is a lack of functionality in the built-in Authorization options. You can pick the oAuth 2.0 option, but there is no way to put the "resource" parameter in the token request. AWS users are probably much happier, because they</p>]]></description><link>https://lnx.azurewebsites.net/postman-pre-request-script-for-azure-rest-api-client-credential-grant/</link><guid isPermaLink="false">ba185b9e-f935-4f4d-9cb2-129d536e80e0</guid><category><![CDATA[Azure AD]]></category><category><![CDATA[Postman]]></category><category><![CDATA[ARM]]></category><dc:creator><![CDATA[Michal Smereczynski]]></dc:creator><pubDate>Mon, 25 Jun 2018 05:47:35 GMT</pubDate><content:encoded><![CDATA[<p>When you are using Postman and you are working with Azure, there is a lack of functionality in the built-in Authorization options. You can pick the oAuth 2.0 option, but there is no way to put the "resource" parameter in the token request. AWS users are probably much happier, because they have a dedicated configuration option. For Azure? Not yet.</p>

<p>But it is not so complicated to do it yourself. In a request to the ARM API (<a href="https://management.azure.com">https://management.azure.com</a>) you need to have a Content-Type header and an Authorization header where the Bearer token is placed. Use a variable for the token - let's say <code>{{access_token}}</code>.</p>

<p>Next, you need to create a Pre-request Script to handle Access Token acquisition from the oAuth endpoint in Azure Active Directory - you will find it in the "Endpoints" blade inside the "Application registration" blade (AAD).</p>

<p>Here is the code I'm using for the Pre-request Script:</p>

<script src="https://gist.github.com/smereczynski/5a558a82ba4430b15f6fc8d478edbf2c.js"></script>
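<p>The heart of the script is a POST to the AAD (v1) token endpoint with the <code>resource</code> parameter set to ARM - exactly the parameter the built-in helper can't set. A sketch of how the endpoint is assembled (the tenant value below is a placeholder):</p>

```shell
tenant="tttttttt-tttt-tttt-tttt-tttttttttttt"   # placeholder tenant ID
token_url="https://login.microsoftonline.com/${tenant}/oauth2/token"

# client_credentials grant; "resource" must point at the ARM endpoint
grant_type="client_credentials"
resource="https://management.azure.com/"
echo "$token_url"
```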

<p>As you can see, I'm not hardcoding <code>client_id</code> (Application ID), <code>client_secret</code> (Application Key) or <code>tenant</code>. I keep them in my Postman Environment. This is the same place where <code>access_token</code> is written when acquired from the oAuth endpoint.</p>]]></content:encoded></item><item><title><![CDATA[Ubuntu 18.04 in Azure Marketplace]]></title><description><![CDATA[<p>Ubuntu 18.04-LTS is still not listed as a choice in Ubuntu VM image group in Azure Marketplace, but it is already listed as a standalone option for VM deployment.</p>

<p>You can find it under <a href="https://portal.azure.com/#create/Canonical.UbuntuServer1804LTS-ARM">this</a> address.</p>]]></description><link>https://lnx.azurewebsites.net/ubuntu-18-04-in-azure-marketplace/</link><guid isPermaLink="false">a3192470-a751-4e5d-be16-aa4650ab9d76</guid><category><![CDATA[Azure]]></category><category><![CDATA[Ubuntu]]></category><dc:creator><![CDATA[Michal Smereczynski]]></dc:creator><pubDate>Tue, 29 May 2018 17:42:41 GMT</pubDate><content:encoded><![CDATA[<p>Ubuntu 18.04-LTS is still not listed as a choice in Ubuntu VM image group in Azure Marketplace, but it is already listed as a standalone option for VM deployment.</p>

<p>You can find it under <a href="https://portal.azure.com/#create/Canonical.UbuntuServer1804LTS-ARM">this</a> address.</p>]]></content:encoded></item><item><title><![CDATA[Azure Container Registry]]></title><description><![CDATA[<p>Azure Container Registry is a managed Docker registry service based on the open-source <a href="https://docs.docker.com/registry/">Docker Registry 2.0</a>.</p>

<h2 id="usecases">Use cases</h2>

<ul>
<li>Orchestration systems that manage containerized applications, like DC/OS, Docker Swarm or Kubernetes.</li>
<li>Azure services based on Docker containers, like  Azure Kubernetes Service (AKS), App Service, Batch, Azure Container Instances or</li></ul>]]></description><link>https://lnx.azurewebsites.net/azure-container-registry/</link><guid isPermaLink="false">5f268739-d77d-48af-9e8f-0fef879b7951</guid><category><![CDATA[Azure]]></category><category><![CDATA[Container Registry]]></category><category><![CDATA[Docker]]></category><category><![CDATA[ACR]]></category><dc:creator><![CDATA[Michal Smereczynski]]></dc:creator><pubDate>Mon, 28 May 2018 06:03:16 GMT</pubDate><content:encoded><![CDATA[<p>Azure Container Registry is a managed Docker registry service based on the open-source <a href="https://docs.docker.com/registry/">Docker Registry 2.0</a>.</p>

<h2 id="usecases">Use cases</h2>

<ul>
<li>Orchestration systems that manage containerized applications, like DC/OS, Docker Swarm or Kubernetes.</li>
<li>Azure services based on Docker containers, like Azure Kubernetes Service (AKS), App Service, Batch, Azure Container Instances or Service Fabric.</li>
</ul>

<p>Azure Container Registry can also be used as part of a standard container development workflow - for example, as a container target for continuous integration and deployment tools like Visual Studio Team Services or Jenkins.</p>

<p>Azure Container Registry is also a suite of features that provides Docker container image build capability in Azure. Configurable build tasks can help automate container OS and framework patching pipelines and build images automatically when commits come into the code repository.</p>

<h3 id="pricing">Pricing</h3>

<p>Azure Container Registry has three pricing tiers: Basic, Standard and Premium. The following table details the features and limits of the Basic, Standard, and Premium service tiers.</p>

<p>The differences in prices are "radical", but there is no problem with changing tiers later when needed.</p>

<p>So, assuming full storage utilization per month we have 10GB for €4.37/month, 100GB for €17.45/month and 500GB for €43.59/month.</p>

<p>We are not limited to the GB included in a pricing plan. If we want to extend it, we just need to pay €0.003/GB/day (less than €1 per month for an additional 10GB of storage in the Basic plan).</p>

<p>If we want to use ACR for cloud-based Docker image builds, we also need to pay for it - an additional cost of €0.00005/second (every minute of the build will cost us €0.003, so every three minutes of build time is around ¢1).</p>
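<p>A quick sanity check of that arithmetic (rates as quoted above - treat them as a snapshot, since prices change):</p>

```shell
# EUR 0.00005 per second of build time, scaled to one and three minutes.
per_minute=$(awk 'BEGIN { printf "%.4f", 0.00005 * 60 }')
per_three_minutes=$(awk 'BEGIN { printf "%.4f", 0.00005 * 60 * 3 }')
echo "one minute:    EUR ${per_minute}"
echo "three minutes: EUR ${per_three_minutes}"
```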

<h2 id="competition">Competition</h2>

<blockquote>
  <p><a href="https://docs.docker.com/docker-hub/">Docker Hub</a> is a cloud-based registry service which allows you to link to code repositories, build your images and test them, stores manually pushed images, and links to Docker Cloud so you can deploy images to your hosts. It provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline.</p>
</blockquote>

<h3 id="pricing">Pricing</h3>

<p>In Docker Hub we are not paying for storage and builds. There are also no pricing tiers that depend on performance, limits or features. In contrast to ACR, in Docker Hub all repositories are public by default. If you want to have more than one private repository, you need to pay, and the price depends on the number of repositories. There is also a difference in the number of possible parallel builds per pricing plan.</p>

<h2 id="comparison">Comparison</h2>

<p>Of course it's hard to compare Azure Container Registry with Docker Hub. The power of ACR is privacy by default, integration with Azure services and the strong security compliance inherited from Azure itself. <br>
The power of Docker Hub, on the other hand, is being a <a href="https://hub.docker.com/explore/">community hub for image creators</a> and out-of-the-box integration with GitHub and Bitbucket.</p>

<p>The second part of the comparison is pricing - the choice depends on your repositories' characteristics, so I will not summarize this comparison - you should do that yourself.</p>

<h2 id="acrinaction">ACR in action</h2>

<script src="https://asciinema.org/a/183710.js" data-speed="3" id="asciicast-183710" async></script>]]></content:encoded></item><item><title><![CDATA[Azure SDK for Python in Docker container]]></title><description><![CDATA[<p>Using Docker can be really annoying if you are trying to use it for purposes it probably was not designed for - at least in my opinion. But it is a really great solution if you don't want to maintain a VM or other virtual environment.</p>

<p>In my case, I have</p>]]></description><link>https://lnx.azurewebsites.net/azure-sdk-for-python-in-docker-container/</link><guid isPermaLink="false">d66320a0-1085-4171-b889-d3586d831e70</guid><category><![CDATA[Python]]></category><category><![CDATA[Azure]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Michal Smereczynski]]></dc:creator><pubDate>Sat, 26 May 2018 10:17:10 GMT</pubDate><content:encoded><![CDATA[<p>Using Docker can be really annoying if you are trying to use it for purposes it probably was not designed for - at least in my opinion. But it is a really great solution if you don't want to maintain a VM or other virtual environment.</p>

<p>In my case, I have a Python script which I need to run periodically. I don't need and I don't want to maintain a VM for that. I just want to run this script from time to time. Of course it is not just a script - it has dependencies (Azure SDK for Python), so it's more like a bundle than a script - which is normal for Python and many other languages. <br>
Docker is a perfect solution for me in that case. I can bundle the SDK and other dependencies and use it as a base image for my script's runtime environment - all without storing any data on the image itself.</p>

<p>I have clear prerequisites:</p>

<ol>
<li>I need Azure SDK for Python.  </li>
<li>I have my script written in Python 3.6.  </li>
<li>I need to pass some parameters to my script to prevent hardcoding.  </li>
<li>I wish to run this script on almost any machine - Linux, Mac and Windows.  </li>
<li>I wish to run this script periodically.</li>
</ol>

<h2 id="azuresdkforpython">Azure SDK for Python</h2>

<p>You can find it on PyPi (<a href="https://pypi.org/project/azure/">https://pypi.org/project/azure/</a>) and you can install it using <code>pip install azure</code>. There is no philosophy here - it's open, it's developed on GitHub and it's available on PyPi.</p>

<p>An additional prerequisite for Azure SDK for Python is <em>keyrings.alt</em> package - due to <a href="https://lnx.azurewebsites.net/please-enter-password-for-encrypted-keyring-when-running-python-script-on-ubuntu/">this</a> issue.</p>

<p>So, I have:</p>

<pre><code>pip install azure keyrings.alt
</code></pre>

<h2 id="python36">Python 3.6</h2>

<p>I'm working in a Python 3.6 environment locally on my computer, where I'm developing, so I wish to have the same environment at the script's runtime. It's probably compatible with 3.5, 3.4 and 3.3 but... I'm working on 3.6.</p>

<p>Let's see my (sample) script - it lists all resource groups in my subscription:</p>

<script src="https://gist.github.com/smereczynski/2bfcac4532c6b0c3576d9b54946d2761.js"></script>

<h2 id="parameters">Parameters</h2>

<p>As you can see, I'm not hardcoding things like the tenant, application ID, application key or subscription ID in my script. I'm using <code>os.getenv()</code> to extract them from environment variables. This means that I need to include some "sensitive" data in my environment.</p>
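<p>A minimal sketch of that pattern (the variable names match the <em>-e</em> arguments used with <em>docker run</em> later in this post; the fail-fast check is my own addition, not part of the original script):</p>

```python
import os
import sys

# Configuration comes from environment variables instead of being hardcoded.
REQUIRED = ("TENANT_ID", "CLIENT", "KEY", "SUBSCRIPTION")

def load_config():
    """Read required settings from the environment, failing fast if any is absent."""
    config = {name: os.getenv(name) for name in REQUIRED}
    missing = [name for name, value in config.items() if not value]
    if missing:
        # A clear error beats a cryptic authentication failure later on.
        sys.exit("Missing environment variables: " + ", ".join(missing))
    return config
```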

<h2 id="interoperability">Interoperability</h2>

<p>I don't want to focus on the question of whether my script will run on Windows Server or Linux... or the Mac that I'm using personally. Python is Python but... environments differ between operating systems. And this is where Docker comes into play. It does not matter where you run your Docker image - it will be exactly the same from the code/script perspective.</p>

<p>We have two options in the Docker world - the first is to use pre-built images, prepared by the community or team-mates; the second is to use custom images we build on our own.</p>

<p>If you are looking for ready-to-use images, check on <a href="https://hub.docker.com/explore/">Docker Hub</a> or <a href="https://store.docker.com/">Docker Store</a>.</p>

<p>But if you are looking for a more flexible solution, or you just want to have a lot of fun, try to build your own Docker image using a Dockerfile. As I stated above, we need Python 3.6, the <em>azure</em> package and the <em>keyrings.alt</em> package. Let's create a Dockerfile for that:</p>

<script src="https://gist.github.com/smereczynski/fc518ca22771368a3820edd23fb525c9.js"></script>

<p>As you can see, it's really simple. We are getting the Python image with the tag pointing to version 3.6 from Docker Hub - a community repository - and it is the official Python image for Docker. You can check it <a href="https://store.docker.com/images/python">here</a>. <br>
The second step is to install the packages we need on top of the Python image. To do that, we are using <code>pip</code>, of course. Image building means that, having the Python 3.6 image, we install the additional packages we want and then generate a new image based on the base one plus the changes we made. After that, we have a static environment image with Python 3.6 and the Azure SDK for Python.</p>

<h2 id="complexdockerimage">Complex Docker image</h2>

<p>Having our script and a Docker image based on the official Python 3.6 image, we can prepare a complete Docker image with a ready-to-use solution. We need to merge the script with the environment image. We will do it using a Dockerfile, adding the script to it:</p>

<script src="https://gist.github.com/smereczynski/287b634f41af8a5d84c15eddb3ec6ca1.js"></script>

<h2 id="runtime">Runtime</h2>

<p>Assuming that we have created the image above, we have a complete stack: a Python 3.6 interpreter, the Azure SDK for Python and the <em>keyrings.alt</em> package. But when this image runs, it will do... nothing. The script is inside, but the command is not declared. We need to declare what the image will do on startup:</p>

<script src="https://gist.github.com/smereczynski/c0a61f24a6bfcda69ebaac030ddb9eb7.js"></script>

<p>And this is a complete solution. On startup, the container will run the <em>run.py</em> script, using the Python 3.6 interpreter, where the Azure SDK for Python is installed along with the <em>keyrings.alt</em> package.</p>

<h2 id="build">Build</h2>

<p>At this point, we need to build our Docker image, and to do so we need to have the <em>run.py</em> script and the <em>Dockerfile</em> in the same directory. Using a shell where Docker is installed - no matter on what OS - go to this directory and run:</p>

<pre><code>docker build -t imagename .
</code></pre>

<p>We have built the Docker image based on the definition from the Dockerfile, tagging it as "imagename"; this tag will be used as the image name when running it.</p>

<h2 id="run">Run</h2>

<p>Now we know three things:</p>

<ol>
<li>We have an "imagename" Docker image.  </li>
<li>We want to run <em>run.py</em> script which is on that image.  </li>
<li>We need to "inject" environment variables with sensitive data to the runtime.</li>
</ol>

<p>The <em>run.py</em> script will run automatically because we built the image that way. The only thing we need to do is pass the environment variables to the container on startup. To do that, we will use <em>-e</em> arguments to the <em>docker run</em> command. Let's do it:</p>

<pre><code>docker run -e "TENANT_ID=&lt;tenant_id&gt;" -e "CLIENT=&lt;application_id&gt;" -e "KEY=&lt;key&gt;" -e "SUBSCRIPTION=&lt;subscription_id&gt;" imagename:latest
</code></pre>

<h3 id="conclusion">Conclusion</h3>

<p>The script should run, and you should start treating Docker as your daily-basis tool. Not because it's fancy and cool, but because it's easy to use, simple, and it works almost everywhere.</p>

<p>If you don't want to wait for the image with the SDK to build, I have created a ready-to-use image. You can find it <a href="https://hub.docker.com/r/smereczynski/docker-azure-sdk-for-python/">here</a> and use it as a base:</p>

<script src="https://gist.github.com/smereczynski/6fdde20d706123002f635d093f5de5d4.js"></script>]]></content:encoded></item><item><title><![CDATA["Please enter password for encrypted keyring" when running Python script on Ubuntu]]></title><description><![CDATA[<p>During the custom Linux build agent deployment for Visual Studio Team Services (Ubuntu Server 18.04-LTS) I noticed an issue with something I had not noticed previously on Ubuntu Server and am not noticing on my Mac OS X. When I'm trying to run Python (3.6)</p>]]></description><link>https://lnx.azurewebsites.net/please-enter-password-for-encrypted-keyring-when-running-python-script-on-ubuntu/</link><guid isPermaLink="false">a651745b-cc37-479f-8595-fd097cec0963</guid><category><![CDATA[Ubuntu]]></category><category><![CDATA[Python]]></category><category><![CDATA[Azure]]></category><category><![CDATA[Azure AD]]></category><dc:creator><![CDATA[Michal Smereczynski]]></dc:creator><pubDate>Sun, 29 Apr 2018 16:03:08 GMT</pubDate><content:encoded><![CDATA[<p>During the custom Linux build agent deployment for Visual Studio Team Services (Ubuntu Server 18.04-LTS) I noticed an issue with something I had not noticed previously on Ubuntu Server and am not noticing on my Mac OS X. When I'm trying to run a Python (3.6) script which uses the Azure SDK, I'm receiving an interactive prompt: "Please enter password for encrypted keyring". It's not a problem to create a keyring and provide a password when running the script interactively, but when we are talking about automation, or in the context of VSTS, it is not acceptable to have any interactive steps in the process. I have started some investigation on that.</p>

<h2 id="whoisasking">Who is asking</h2>

<p>When I terminated this interactive prompt, I got a traceback pointing to where the prompt originates in the code:</p>

<pre><code>Please enter password for encrypted keyring:
  File "/usr/local/lib/python3.6/dist-packages/msrestazure/azure_active_directory.py", line 448, in __init__
self.set_token()
  File "/usr/local/lib/python3.6/dist-packages/msrestazure/azure_active_directory.py", line 485, in set_token
self._default_token_cache(self.token)
  File "/usr/local/lib/python3.6/dist-packages/msrestazure/azure_active_directory.py", line 207, in _default_token_cache
keyring.set_password(self.cred_store, self.store_key, str(token))
  File "/usr/lib/python3/dist-packages/keyring/core.py", line 47, in set_password
_keyring_backend.set_password(service_name, username, password)
  File "/usr/lib/python3/dist-packages/keyrings/alt/file_base.py", line 135, in set_password
password_encrypted = self.encrypt(password.encode('utf-8'), assoc)
  File "/usr/lib/python3/dist-packages/keyrings/alt/file.py", line 206, in encrypt
cipher = self._create_cipher(self.keyring_key, salt, IV)
  File "/usr/lib/python3/dist-packages/keyring/util/properties.py", line 56, in __get__
return self.fget(obj)
  File "/usr/lib/python3/dist-packages/keyrings/alt/file.py", line 96, in keyring_key
self._unlock()
  File "/usr/lib/python3/dist-packages/keyrings/alt/file.py", line 186, in _unlock
'Please enter password for encrypted keyring: ')
  File "/usr/lib/python3.6/getpass.py", line 77, in unix_getpass
passwd = _raw_input(prompt, stream, input=input)
  File "/usr/lib/python3.6/getpass.py", line 146, in _raw_input
line = input.readline()
</code></pre>

<p>I have focused on <code>File "/usr/local/lib/python3.6/dist-packages/msrestazure/azure_active_directory.py", line 207, in _default_token_cache</code>.</p>

<p>What I found there is:</p>

<pre><code>def _default_token_cache(self, token):
    """Store token for future sessions.

    :param dict token: An authentication token.
    :rtype: None
    """
    self.token = token
    if keyring:
        try:
            keyring.set_password(self.cred_store, self.store_key, str(token))
        except Exception as err:
            _LOGGER.warning("Keyring cache token has failed: %s", str(err))
</code></pre>

<p>So, it is nothing more than using the system keyring to store an OAuth2 token for future use. <code>msrestazure/azure_active_directory.py</code> - which is used by the Azure SDK for Python - checks whether the <code>keyring</code> Python module is available in our system and, if so, uses it to store the OAuth token. Clear. </p>

<h2 id="whyitstartedtohappen">Why did it start to happen?</h2>

<p>I have been using Ubuntu Server 16.04-LTS for a long time, and there is no issue like that there. Why does it appear on 18.04? I did some more investigation and now I know.</p>

<p>When installing the <code>python3-pip</code> package on Ubuntu 18.04, I noticed that <code>python3-keyring</code> and <code>python3-keyrings.alt</code> are also installed. Knowing that, it's clear that the <code>_default_token_cache()</code> function in <code>msrestazure/azure_active_directory.py</code> will decide to use keyring.</p>

<p>I checked where this comes from by looking at the <code>python3-pip</code> dependencies and recommendations at <a href="https://packages.ubuntu.com/bionic/python3-pip">packages.ubuntu.com</a>. One of the recommended packages - and in fact a used module - is <code>python3-wheel</code>, and this package <a href="https://packages.ubuntu.com/bionic/python3-wheel">has</a>:</p>

<p><strong>python3-keyring</strong> - 
store and access your passwords safely - Python 3 version of the package</p>

<p>and</p>

<p><strong>python3-keyrings.alt</strong> - 
alternate backend implementations for python3-keyring</p>

<p>as recommended dependencies. So, when installing the <code>python3-pip</code> package in default mode, we will always get <code>python3-keyring</code> installed as well.</p>

<p>There is no such recommendation in <a href="https://packages.ubuntu.com/xenial/python3-wheel">16.04-LTS</a>.</p>

<h2 id="solution">Solution</h2>

<p>Of course, we can try not to install <code>python3-keyring</code>, or remove it. But it may be needed or even required by other packages or modules. If we do not need encryption, we can use the PlaintextKeyring backend, which does not need any password. We have at least two options to do this and get back to a state where no keyring password is required.</p>

<h3 id="configurationfile">Configuration file</h3>

<p>Edit <code>~/.local/share/python_keyring/keyringrc.cfg</code>:</p>

<pre><code>[backend]
default-keyring=keyrings.alt.file.PlaintextKeyring
</code></pre>

<h3 id="pythonapi">Python API</h3>

<p>Add this to your code:</p>

<pre><code>import keyring.backend
from keyrings.alt.file import PlaintextKeyring

keyring.set_keyring(PlaintextKeyring())
</code></pre>

<h2 id="security">Security</h2>

<p>The safety of using <code>PlaintextKeyring()</code> as token storage is another story and will probably be discussed on GitHub soon. I will keep you informed about other solutions (using keyring where possible) and security conclusions.</p>]]></content:encoded></item><item><title><![CDATA[Ubuntu 18.04-LTS (Bionic Beaver) on Azure]]></title><description><![CDATA[<p>27th of April was the release date of the new LTS (Long Term Support) version of the Ubuntu Linux distribution - Ubuntu 18.04 Bionic Beaver. However, there is no 18.04 image in the Azure Marketplace yet.</p>

<p>But an official, stable image (not a DAILY build) is already there. To find it and use it,</p>]]></description><link>https://lnx.azurewebsites.net/ubuntu-18-04-lts-bionic-beaver-on-azure/</link><guid isPermaLink="false">b0fbaf90-df6c-458e-b10a-ad8286fb7f4f</guid><category><![CDATA[Azure CLI]]></category><category><![CDATA[Azure]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Ubuntu]]></category><dc:creator><![CDATA[Michal Smereczynski]]></dc:creator><pubDate>Sun, 29 Apr 2018 14:18:04 GMT</pubDate><content:encoded><![CDATA[<p>27th of April was the release date of the new LTS (Long Term Support) version of the Ubuntu Linux distribution - Ubuntu 18.04 Bionic Beaver. However, there is no 18.04 image in the Azure Marketplace yet.</p>

<p>But an official, stable image (not a DAILY build) is already there. To find it and use it, you need the Azure CLI (or PowerShell).</p>

<p>To find it, use:</p>

<pre><code>az vm image list --all -p Canonical -f UbuntuServer -s 18.04-LTS --query [].urn -o tsv
</code></pre>

<p>Find the one with the "18.04-LTS" SKU.</p>

<p>Then, to deploy it, use:</p>

<pre><code>az vm create -n MyVm -g MyResourceGroup --image Canonical:UbuntuServer:18.04-LTS:18.04.201804262
</code></pre>]]></content:encoded></item><item><title><![CDATA[A Quick way to validate WebHook endpoint for Azure Event Grid]]></title><description><![CDATA[<p>If you need to validate a WebHook endpoint for Azure Event Grid, you have at least two options:</p>

<ol>
<li>Implement a validation method in your application, which you will call using the WebHook.  </li>
<li>Run a standalone validator for a moment and then run your application at the same address.</li>
</ol>

<p>The first option can</p>]]></description><link>https://lnx.azurewebsites.net/a-quick-way-to-validate-webhook-endpoint-for-azure-event-grid/</link><guid isPermaLink="false">c368874d-9696-4baa-8338-c20f79f2b7f0</guid><category><![CDATA[Azure]]></category><category><![CDATA[Python]]></category><category><![CDATA[Event Grid]]></category><dc:creator><![CDATA[Michal Smereczynski]]></dc:creator><pubDate>Fri, 30 Mar 2018 09:45:24 GMT</pubDate><content:encoded><![CDATA[<p>If you need to validate a WebHook endpoint for Azure Event Grid, you have at least two options:</p>

<ol>
<li>Implement a validation method in your application, which you will call using the WebHook.  </li>
<li>Run a standalone validator for a moment and then run your application at the same address.</li>
</ol>

<p>The first option can be expensive or impossible (for example, if you are going to use some SaaS tool on a subdomain you own).</p>

<p>The second option is quick and cheap. I created a sample Python web server whose only purpose is to respond to the Azure Event Grid WebHook validation request.</p>
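<p>For reference, the handshake itself is simple: Event Grid POSTs a <code>Microsoft.EventGrid.SubscriptionValidationEvent</code> carrying a <code>validationCode</code>, and the endpoint must echo it back as <code>validationResponse</code>. A minimal, framework-agnostic sketch of just the JSON handling (not the actual code from my repository):</p>

```python
import json

VALIDATION_EVENT = "Microsoft.EventGrid.SubscriptionValidationEvent"

def handle_eventgrid_request(body):
    """Return the validation response body if the request is a
    subscription-validation handshake, otherwise None."""
    events = json.loads(body)
    for event in events:
        if event.get("eventType") == VALIDATION_EVENT:
            code = event["data"]["validationCode"]
            # Event Grid expects the code echoed back as validationResponse.
            return json.dumps({"validationResponse": code})
    return None
```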

<p>Feel free to use, contribute and share: <a href="https://github.com/smereczynski/Azure-EventGrid-WebHook-Validator">https://github.com/smereczynski/Azure-EventGrid-WebHook-Validator</a></p>]]></content:encoded></item><item><title><![CDATA[How to search all VM images in Azure]]></title><description><![CDATA[<p>Azure portal (code name Ibiza) is really cool, if you want to learn Azure or make some showcase. The problem is, that you will not find everything in the Portal. The great example is a daily build of your favourite Linux distribution - like Ubuntu 18.04 LTS which is</p>]]></description><link>https://lnx.azurewebsites.net/how-to-search-all-vm-images-in-azure/</link><guid isPermaLink="false">94eaedc7-a6eb-401f-b1fa-b436ecdc6d40</guid><category><![CDATA[Azure]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Azure CLI]]></category><category><![CDATA[VM]]></category><dc:creator><![CDATA[Michal Smereczynski]]></dc:creator><pubDate>Tue, 27 Feb 2018 19:49:41 GMT</pubDate><content:encoded><![CDATA[<p>Azure portal (code name Ibiza) is really cool, if you want to learn Azure or make some showcase. The problem is, that you will not find everything in the Portal. The great example is a daily build of your favourite Linux distribution - like Ubuntu 18.04 LTS which is not yet released today (as of February 27th). Does it mean that there is no 18.04 image in Azure? No.</p>

<p>To find more of the available images, you need to use the Azure CLI. The command we will use for browsing VM images is <code>vm image</code>. But first, we need to clarify some parameters:</p>

<ol>
<li><strong>publisher</strong> - the company or entity behind the image.  </li>
<li><strong>offer</strong> - this is the offering from the publisher.  </li>
<li><strong>sku</strong> - Stock Keeping Unit is an ID assigned to an image to identify the exact product version.</li>
</ol>

<p>To list all publishers in West Europe Azure region we will perform:</p>

<pre><code>az vm image list-publishers -l westeurope --query [].name -o tsv
</code></pre>

<p>To list all offers in the West Europe Azure region from a chosen publisher, let's say Canonical, we will perform:</p>

<pre><code>az vm image list-offers -l westeurope -p Canonical --query [].name -o tsv
</code></pre>

<p>To list all SKUs from the UbuntuServer offer in the West Europe Azure region we will perform:</p>

<pre><code>az vm image list-skus -l westeurope -p Canonical -f UbuntuServer --query [].name -o tsv
</code></pre>

<p>Now we need to check how VM creation is performed from Azure CLI. Typically it looks like this:</p>

<pre><code>az vm create -n MyVm -g MyResourceGroup --image UbuntuLTS
</code></pre>

<p>where <code>--image</code> is: "The name of the operating system image as a URN alias, URN, custom image name or ID, or VHD blob URI."</p>

<p>We are not using a custom image or VHD, and our image is not a standard, aliased image (like UbuntuLTS). So, we need to know the <code>image URN</code>. <br>
It's as easy as:</p>

<pre><code>az vm image list --all -p &lt;publisher&gt; -f &lt;offer&gt; -s &lt;sku&gt; -l &lt;region&gt;
</code></pre>

<p>Example for Ubuntu Server 18.04 LTS:</p>

<pre><code>az vm image list --all -p Canonical -f UbuntuServer -s 18.04-DAILY-LTS -l westeurope
</code></pre>

<p>What we want to know is URN, so:</p>

<pre><code>az vm image list --all -p Canonical -f UbuntuServer -s 18.04-DAILY-LTS -l westeurope --query [].urn -o tsv
</code></pre>

<p>The last part of the URN is the image version - you are probably looking for the latest one. For 18.04 LTS daily builds, as of February 27th, we have:</p>

<pre><code>Canonical:UbuntuServer:18.04-DAILY-LTS:18.04.201802130
Canonical:UbuntuServer:18.04-DAILY-LTS:18.04.201802140
Canonical:UbuntuServer:18.04-DAILY-LTS:18.04.201802160
Canonical:UbuntuServer:18.04-DAILY-LTS:18.04.201802170
Canonical:UbuntuServer:18.04-DAILY-LTS:18.04.201802180
Canonical:UbuntuServer:18.04-DAILY-LTS:18.04.201802190
Canonical:UbuntuServer:18.04-DAILY-LTS:18.04.201802210
Canonical:UbuntuServer:18.04-DAILY-LTS:18.04.201802220
Canonical:UbuntuServer:18.04-DAILY-LTS:18.04.201802230
Canonical:UbuntuServer:18.04-DAILY-LTS:18.04.201802240
</code></pre>
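<p>Since a URN is just four colon-separated fields (publisher:offer:sku:version), picking the newest build from a list like the one above can be scripted - a small sketch (it relies on these fixed-width, date-based version strings sorting correctly as plain strings):</p>

```python
def latest_urn(urns):
    """Pick the URN with the highest version (publisher:offer:sku:version).
    Works here because the date-based versions sort lexicographically."""
    return max(urns, key=lambda urn: urn.rsplit(":", 1)[1])

urns = [
    "Canonical:UbuntuServer:18.04-DAILY-LTS:18.04.201802130",
    "Canonical:UbuntuServer:18.04-DAILY-LTS:18.04.201802240",
    "Canonical:UbuntuServer:18.04-DAILY-LTS:18.04.201802190",
]
print(latest_urn(urns))  # Canonical:UbuntuServer:18.04-DAILY-LTS:18.04.201802240
```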

<p>If you want to create a VM from the latest daily build's image, then:</p>

<pre><code>az vm create -n MyVm -g MyResourceGroup --image Canonical:UbuntuServer:18.04-DAILY-LTS:18.04.201802240
</code></pre>]]></content:encoded></item><item><title><![CDATA[How to remove all empty or lockless Resource Groups using Azure CLI]]></title><description><![CDATA[<p>Sometimes a little cleanup is needed on our subscriptions. The simplest way to clean up is, of course, to delete empty and not needed Resource Groups. Let's do it in Azure CLI.</p>

<p>With empty RGs it's quite easy - just list all RGs and for each RG check if there</p>]]></description><link>https://lnx.azurewebsites.net/how-to-remove-all-empty-resource-groups-using-azure-cli/</link><guid isPermaLink="false">452d8034-7c21-4942-874f-26b57734bfd2</guid><category><![CDATA[Azure CLI]]></category><dc:creator><![CDATA[Michal Smereczynski]]></dc:creator><pubDate>Fri, 23 Feb 2018 13:02:23 GMT</pubDate><content:encoded><![CDATA[<p>Sometimes a little cleanup is needed on our subscriptions. The simplest way to clean up is, of course, to delete empty and not needed Resource Groups. Let's do it in Azure CLI.</p>

<p>With empty RGs it's quite easy - just list all RGs and, for each RG, check if there are resources in it - if not, then delete it:</p>

<pre><code>for i in $(az group list -o tsv --query [].name); do
  if [ "$(az resource list -g $i -o tsv)" ]; then
    echo "$i is not empty"
  else
    az group delete -n $i -y --no-wait
  fi
done
</code></pre>

<p>Cleaning non-empty RGs is much more complicated, because we need to decide which of them are needed and which are not. The quickest way to do it is to add a <code>Delete Lock</code> to every RG we want to leave untouched. Then we just need to iterate over all RGs (as in the previous example) and, for each RG, check if there is any lock on it. If there is no lock, then delete:</p>

<pre><code>for i in $(az group list -o tsv --query [].name); do
  if [ "$(az lock list -g $i -o tsv --query [].name)" ]; then
    echo "$i has a lock"
  else
    az group delete -n $i -y --no-wait
  fi
done
</code></pre>]]></content:encoded></item><item><title><![CDATA[Azure Resource Manager API calls from Python]]></title><description><![CDATA[<p>Direct API calls to the Azure Resource Manager REST API are useful mostly in two scenarios - when integrating ARM functions in some application, and when the Portal, CLI, PowerShell or SDK is not enough. Of course there is also a third scenario - when you want to learn how ARM</p>]]></description><link>https://lnx.azurewebsites.net/azure-resource-manager-api-calls-from-python/</link><guid isPermaLink="false">e1d514f0-47f9-4623-8f69-f0ffb7eca962</guid><category><![CDATA[Azure]]></category><category><![CDATA[Python]]></category><category><![CDATA[Azure AD]]></category><category><![CDATA[ARM]]></category><dc:creator><![CDATA[Michal Smereczynski]]></dc:creator><pubDate>Fri, 16 Feb 2018 18:52:43 GMT</pubDate><content:encoded><![CDATA[<p>Direct API calls to the Azure Resource Manager REST API are useful mostly in two scenarios - when integrating ARM functions in some application, and when the Portal, CLI, PowerShell or SDK is not enough. Of course there is also a third scenario - when you want to learn how ARM really works.</p>

<h2 id="azureresourcemanagerapi">Azure Resource Manager API</h2>

<p><mark>The ARM REST API is well documented <a href="https://docs.microsoft.com/en-us/rest/api/">here</a></mark>. There you will find information about how to prepare for using the REST API (i.e. create a Service Principal) and how to perform API calls. There is also an (almost?) complete API Reference divided by service or resource type - this is where you will search for the methods you can use for the target resource type.</p>

<p>To start working with API calls to the ARM API, you need to have 5 things and know where to find a 6th:</p>

<ol>
<li>Client ID  </li>
<li>Client Secret  </li>
<li>Tenant ID  </li>
<li>Resource  </li>
<li>Authority URL  </li>
<li>API version</li>
</ol>

<p>The first five of the six above relate directly to the OAuth2 flow, where Azure AD is the Identity Provider. The sixth one is a query-string parameter that selects the API version to call.</p>

<p><strong>Client ID</strong> is an Application ID you created for RBAC (as described <a href="https://lnx.azurewebsites.net/non-interactive-login-in-azure-cli-2-0/">here</a>). This is the AAD Application with a Service Principal object related to it.</p>

<p><strong>Client Secret</strong> is an AAD Application's key (password).</p>

<p><strong>Tenant ID</strong> is your AD Tenant's ID you can find in AAD Properties or in output of the <code>az ad sp create-for-rbac</code> command.</p>

<p><strong>Resource</strong> is <a href="https://management.azure.com/">https://management.azure.com/</a> - Azure Resource Manager provider APIs URI.</p>

<p><strong>Authority URL</strong> is <a href="https://login.microsoftonline.com/">https://login.microsoftonline.com/</a> - the Identity Provider address.</p>

<p><strong>API version</strong> is a query-string parameter with the designated API version you should provide for the service you are calling. You can find this parameter in the API reference under the provider of choice - here is an example for <em>Resource Management / Resource Groups / List</em>.</p>

<p>Let's say we want to list all Resource Groups in our subscription:</p>

<h3 id="findallsubscriptiononthetenant">Find all subscriptions on the tenant</h3>

<ol>
<li>We need to find the proper page in the Azure REST API reference: <a href="https://docs.microsoft.com/en-us/rest/api/resources/subscriptions/list">https://docs.microsoft.com/en-us/rest/api/resources/subscriptions/list</a>  </li>
<li>We need to determine the API version: <em>2016-06-01</em> (from the reference page).  </li>
<li><p>We need to determine the call's method and URI:</p>

<p>GET <a href="https://management.azure.com/subscriptions?api-version=2016-06-01">https://management.azure.com/subscriptions?api-version=2016-06-01</a></p></li>
</ol>

<p>As output, we will get a list (JSON) of all subscriptions on our Azure tenant.</p>

<h3 id="findallresourcegroupsinthesubscriptionofchoice">Find all resource groups in the subscription of choice</h3>

<ol>
<li>We need to find the proper page in the Azure REST API reference: <a href="https://docs.microsoft.com/en-us/rest/api/resources/resourcegroups/list">https://docs.microsoft.com/en-us/rest/api/resources/resourcegroups/list</a>  </li>
<li>We need to determine the API version: <em>2017-05-10</em> (from the reference page).  </li>
<li><p>We need to determine the call's method and URI:</p>

<p>GET <a href="https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups?api-version=2017-05-10">https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups?api-version=2017-05-10</a></p></li>
</ol>

<p>As output, we will get a list (JSON) of all resource groups in our chosen Azure subscription.</p>
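<p>The two request URIs above differ only in their path and <code>api-version</code> value, so assembling them can be sketched like this (the helper names are my own; the <code>&lt;subscription_id&gt;</code> placeholder stands in for a real ID):</p>

```python
# Sketch: assemble the ARM REST URIs described above.
ARM = "https://management.azure.com"

def subscriptions_url(api_version="2016-06-01"):
    """List-subscriptions endpoint."""
    return f"{ARM}/subscriptions?api-version={api_version}"

def resource_groups_url(subscription_id, api_version="2017-05-10"):
    """List-resource-groups endpoint for one subscription."""
    return f"{ARM}/subscriptions/{subscription_id}/resourcegroups?api-version={api_version}"

print(subscriptions_url())
print(resource_groups_url("<subscription_id>"))
```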

<h2 id="python3">Python (3)</h2>

<p>Handling GET, HEAD, PUT, POST, and PATCH methods in Python can be implemented using many libraries and in many ways. The same is true for OAuth flows. I decided to use the <code>requests</code> (<a href="http://docs.python-requests.org/en/master/">link</a>) library for HTTP methods and the <code>adal</code> (<a href="https://github.com/AzureAD/azure-activedirectory-library-for-python">link</a>) library for Azure AD authentication.</p>

<p>I chose <code>requests</code> because I know it and I'm using it. It is also probably the most popular library for handling HTTP requests. <br>
I chose the <code>adal</code> library because it is officially the proper way to handle authentication against AAD in Python. It is also used in Azure CLI 2.0 and the Azure SDK for Python.</p>

<p>Besides <code>requests</code> and <code>adal</code>, I will also use the <code>json</code> library for handling JSON request bodies and call responses, and <code>os</code> for handling OS environment variables (no credential hardcoding!).</p>

<p>Only the <code>requests</code> and <code>adal</code> libraries need to be installed:</p>

<p><code>pip install requests adal</code></p>

<p>So, the import code block of my Python script will be as follows:</p>

<pre><code>import adal
import requests
import os
import json
</code></pre>

<p>Next, we will declare the necessary variables, reading their values from environment variables:</p>

<pre><code>tenant = os.environ['TENANT']
authority_url = 'https://login.microsoftonline.com/' + tenant
client_id = os.environ['CLIENTID']
client_secret = os.environ['CLIENTSECRET']
resource = 'https://management.azure.com/'
</code></pre>

<p>I'm using venv for the Python runtime and the PyCharm IDE (<a href="https://www.jetbrains.com/pycharm/">link</a>), and both the venv and the environment variables are handled by PyCharm.</p>

<p>Next, we are going to handle the OAuth flow with <code>adal</code>, receiving an authorization token in the authentication context of <a href="https://login.microsoftonline.com/tenant">https://login.microsoftonline.com/tenant</a> - the identity provider:</p>

<pre><code>context = adal.AuthenticationContext(authority_url)
token = context.acquire_token_with_client_credentials(resource, client_id, client_secret)
</code></pre>

<p>We will use the <code>token</code> variable to extract a Bearer authorization token from the response, which looks like this:</p>

<pre><code>{
    "tokenType": "Bearer",
    "expiresIn": 3600,
    "expiresOn": "2018-02-16 19:55:32.068528",
    "resource": "https://management.azure.com/",
    "accessToken": "eyJ0eXAiOiJKV1QiLC...",
    "isMRRT": true,
    "_clientId": "&lt;client_id&gt;",
    "_authority": "https://login.microsoftonline.com/&lt;tenant&gt;"
}
</code></pre>

<p>The <code>accessToken</code> is a JWT - a <a href="https://jwt.io/">JSON Web Token</a> - which can be decoded at <a href="http://jwt.ms/">http://jwt.ms/</a>.</p>

<p><mark>Try to decode it yourself - you will notice the correlation between your Application ID and the token's contents.</mark></p>
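<p>You can also inspect the claims locally. A minimal sketch (the helper name is mine, not part of the script): the payload is the middle, base64url-encoded segment of the token, so it can be read without verifying the signature:</p>

```python
import base64
import json


def decode_jwt_payload(jwt: str) -> dict:
    """Decode the middle (payload) segment of a JWT without verifying it."""
    payload = jwt.split('.')[1]
    # base64url encoding strips the '=' padding; restore it before decoding
    payload += '=' * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))
```

<p>Calling <code>decode_jwt_payload(token['accessToken'])</code> returns the claims as a dict, including <code>appid</code> (your Application ID) and <code>aud</code> (the resource).</p>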

<p>Next we start building the request to the ARM API. First, we need the headers:</p>

<pre><code>headers = {'Authorization': 'Bearer ' + token['accessToken'], 'Content-Type': 'application/json'}
</code></pre>

<p>Then we create a query-string parameter with the API version (<code>2016-06-01</code> is the correct version according to the documentation):</p>

<pre><code>params = {'api-version': '2016-06-01'}
</code></pre>

<p>Next, the URL for getting the subscriptions list (again, according to the documentation):</p>

<pre><code>url = 'https://management.azure.com/subscriptions'
</code></pre>

<p>Finally, we perform the request and print the response:</p>

<pre><code>r = requests.get(url, headers=headers, params=params)
print(json.dumps(r.json(), indent=4, separators=(',', ': ')))
</code></pre>

<p>The response looks like this:</p>

<pre><code>{
    "value": [
        {
            "id": "/subscriptions/&lt;subscription_id&gt;",
            "subscriptionId": "&lt;subscription_id&gt;",
            "displayName": "&lt;subscription_name",
            "state": "Enabled",
            "subscriptionPolicies": {
                "locationPlacementId": "Public_2014-09-01",
                "quotaId": "Sponsored_2016-01-01",
                "spendingLimit": "Off"
            },
            "authorizationSource": "RoleBased"
        }
    ]
}
</code></pre>
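<p>If you need the subscription IDs programmatically rather than printed, the <code>value</code> array can be flattened with a small helper - a sketch, the function name is mine:</p>

```python
def subscription_ids(response_body: dict) -> list:
    """Collect subscriptionId values from an ARM subscriptions list response."""
    return [sub['subscriptionId'] for sub in response_body.get('value', [])]
```

<p>After the GET above, <code>subscription_ids(r.json())</code> returns the IDs as a plain list.</p>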

<p>And the whole script looks like this:</p>

<pre><code>import adal
import requests
import os
import json


tenant = os.environ['TENANT']
authority_url = 'https://login.microsoftonline.com/' + tenant
client_id = os.environ['CLIENTID']
client_secret = os.environ['CLIENTSECRET']
resource = 'https://management.azure.com/'
context = adal.AuthenticationContext(authority_url)
token = context.acquire_token_with_client_credentials(resource, client_id, client_secret)
headers = {'Authorization': 'Bearer ' + token['accessToken'], 'Content-Type': 'application/json'}
params = {'api-version': '2016-06-01'}
url = 'https://management.azure.com/subscriptions'

r = requests.get(url, headers=headers, params=params)

print(json.dumps(r.json(), indent=4, separators=(',', ': ')))
</code></pre>

<p>Of course, GET operations are a little simpler than PUT or POST - there is no request body to send. Let's create a resource group in our subscription (<a href="https://docs.microsoft.com/en-us/rest/api/resources/resourcegroups/createorupdate">docs</a>):</p>

<pre><code>import adal
import requests
import os
import json


tenant = os.environ['TENANT']
authority_url = 'https://login.microsoftonline.com/' + tenant
client_id = os.environ['CLIENTID']
client_secret = os.environ['CLIENTSECRET']
resource = 'https://management.azure.com/'
context = adal.AuthenticationContext(authority_url)
token = context.acquire_token_with_client_credentials(resource, client_id, client_secret)
headers = {'Authorization': 'Bearer ' + token['accessToken'], 'Content-Type': 'application/json'}
params = {'api-version': '2017-05-10'}
url = 'https://management.azure.com/subscriptions/&lt;subscription_id&gt;/resourcegroups/mytestrg'

data = {'location': 'northeurope'}

r = requests.put(url, data=json.dumps(data), headers=headers, params=params)

print(json.dumps(r.json(), indent=4, separators=(',', ': ')))
</code></pre>

<p>Note the change in <code>params</code> and <code>url</code>, the addition of <code>data</code>, and the switch to <code>requests.put</code>. The response should look like this:</p>

<pre><code>{
    "id": "/subscriptions/&lt;subscription_id&gt;/resourceGroups/mytestrg",
    "name": "mytestrg",
    "location": "northeurope",
    "properties": {
        "provisioningState": "Succeeded"
    }
}
</code></pre>]]></content:encoded></item></channel></rss>
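<p>The remaining methods from the list at the top follow the same pattern. As a sketch (not part of the original scripts), a PATCH against the same resource group URL with api-version <code>2017-05-10</code> can update the group's tags; the URL-building helper below is mine, and the subscription ID placeholder must be replaced with a real value:</p>

```python
import json


def resource_group_url(subscription_id: str, name: str) -> str:
    """Build the ARM URL for a resource group (hypothetical helper)."""
    return ('https://management.azure.com/subscriptions/'
            + subscription_id + '/resourcegroups/' + name)


# Tags to merge into the resource group via PATCH.
data = {'tags': {'environment': 'test'}}
body = json.dumps(data)

# With headers and params as in the scripts above, the call would be:
# r = requests.patch(resource_group_url('<subscription_id>', 'mytestrg'),
#                    data=body, headers=headers, params=params)
```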