Azure: Secure Compute, Storage, and Databases
- Tony Stiles
- Jun 16, 2024
- 18 min read
Updated: Jun 18, 2024
Secure Compute, Storage, and Databases
Plan and implement advanced security for compute
Plan and implement remote access to public endpoints, including Azure Bastion and JIT
Azure Bastion is a fully managed PaaS service that provides secure and seamless RDP/SSH connectivity to Azure Virtual Machines directly from the Azure portal over TLS. It is recommended to use Azure Bastion instead of exposing RDP/SSH ports to the public Internet, which increases the attack surface and the security risk of your Virtual Machines.
Azure Bastion provides secure, browser-based Remote Desktop Protocol (RDP) and Secure Shell (SSH) access to your Virtual Machines through the Azure portal. It eliminates the need for a public IP address on the VM, a VPN, or a jump host for secure remote access.
Just-in-Time (JIT) VM access is a feature that allows you to restrict access to Azure Virtual Machines by opening the RDP/SSH ports only when needed, for a specific time period. JIT VM access helps reduce the attack surface of your Virtual Machines and minimizes exposure to potential threats. JIT VM access is integrated with Azure Security Center (now Microsoft Defender for Cloud) and can be enabled for Virtual Machines in a few clicks.
JIT VM access provides an approval workflow that requires users to request access to Virtual Machines and obtain approval from authorized personnel before the RDP/SSH ports are opened. JIT VM access also provides audit logs that allow you to monitor and track all access requests and activities.
When planning and implementing remote access to public endpoints, you should follow the principle of least privilege and limit access to only authorized users and roles. You should also use strong passwords and multi-factor authentication to protect your accounts and credentials.
Azure Bastion and JIT VM access can be integrated with Azure AD for authentication and authorization. This allows you to leverage the benefits of Azure AD, such as conditional access policies, identity protection, and role-based access control (RBAC).
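As a rough illustration of the JIT workflow described above, the following Python sketch requests temporary RDP access through the Azure Resource Manager REST API. The resource names, policy name, api-version, and request-body fields are assumptions for illustration only; verify them against the current Microsoft.Security REST reference before relying on this.

```python
from azure.identity import DefaultAzureCredential
import requests

# Hypothetical identifiers; replace with your own subscription, resource group, VM, and JIT policy.
SUB = "<subscription-id>"
RG = "<resource-group>"
VM_ID = f"/subscriptions/{SUB}/resourceGroups/{RG}/providers/Microsoft.Compute/virtualMachines/<vm-name>"
POLICY = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.Security/locations/<region>/jitNetworkAccessPolicies/default"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# Ask for RDP (3389) to be opened from a single source address for three hours.
body = {
    "virtualMachines": [{
        "id": VM_ID,
        "ports": [{"number": 3389, "duration": "PT3H", "allowedSourceAddressPrefix": "<your-public-ip>"}],
    }],
    "justification": "Troubleshooting session",
}

resp = requests.post(
    f"{POLICY}/initiate?api-version=2020-01-01",
    json=body,
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
```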
Configure network isolation for Azure Kubernetes Service (AKS)
Azure Kubernetes Service (AKS) is a fully managed Kubernetes service that allows you to deploy and manage containerized applications at scale. AKS provides built-in integration with Azure networking features, such as Virtual Network (VNet) and Network Security Groups (NSGs), to help you secure and isolate your Kubernetes clusters.
By default, AKS clusters are deployed into a new or existing VNet, which provides network isolation and traffic filtering for the nodes and pods in the cluster. The nodes are placed in a subnet of that VNet, while Kubernetes services receive virtual IPs from a separate service CIDR that does not consume VNet address space.
AKS supports two network models: kubenet (basic networking) and Azure CNI (advanced networking). With kubenet, only the nodes get IP addresses from the subnet and pods use a separate, NAT-translated address space; with Azure CNI, both nodes and pods get routable IP addresses from the subnet, which simplifies integration with other VNet resources and with network security controls.
To configure network isolation for AKS, you can use the following Azure networking features:
· Virtual Network (VNet): AKS clusters are deployed into a VNet, which provides network isolation and traffic filtering for the nodes and pods in the cluster. You can create a new VNet or use an existing VNet to deploy your AKS cluster.
· Network Security Groups (NSGs): AKS uses NSGs to control inbound and outbound traffic to the nodes and pods in the cluster. You can create custom NSGs and associate them with the AKS subnets to control the traffic flow between the nodes and services (a minimal rule sketch follows the lists in this section).
· Azure Firewall: You can deploy Azure Firewall to provide centralized network security and traffic filtering for your AKS clusters. Azure Firewall can be deployed in a hub-and-spoke architecture to provide secure communication between different VNets and AKS clusters.
· VNet peering: You can use VNet peering to connect VNets in different regions or subscriptions and enable communication between different AKS clusters. Peered VNets must have non-overlapping address spaces, and peering gives you private cross-region or cross-subscription connectivity for your AKS clusters.
To strengthen isolation further, for example with a private AKS cluster whose API server has no public endpoint, you can use the following Azure networking features:
· Azure Private Link: With a private AKS cluster, the Kubernetes API server is exposed through a private endpoint inside your VNet instead of a public IP address, so control-plane traffic stays on the private network. Private Link can also be used to reach dependent services, such as Azure Container Registry, over private endpoints.
· Azure Private DNS: Azure Private DNS lets you host a private DNS zone linked to your VNet and resolve private endpoints, such as the private FQDN of a private AKS API server, for clients in the same VNet or in peered VNets.
· VNet peering: VNet peering allows you to connect VNets in different regions or subscriptions and enable communication between different AKS clusters. VNet peering enables private communication between AKS clusters over the Microsoft backbone network.
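To make the NSG point above concrete, here is a minimal Python sketch using the azure-mgmt-network SDK that adds a rule denying inbound RDP on the NSG attached to the AKS node subnet. The subscription, resource group, NSG name, and priority are placeholder assumptions, not values from this article.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Deny RDP from the Internet on the NSG protecting the AKS node subnet (names are hypothetical).
rule = {
    "protocol": "Tcp",
    "source_address_prefix": "Internet",
    "source_port_range": "*",
    "destination_address_prefix": "*",
    "destination_port_range": "3389",
    "access": "Deny",
    "direction": "Inbound",
    "priority": 100,
    "description": "Block inbound RDP to AKS nodes",
}
network.security_rules.begin_create_or_update(
    "<resource-group>", "<aks-node-nsg>", "deny-rdp-inbound", rule
).result()
```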
Secure and monitor AKS
Use Azure RBAC to control access to AKS resources.
Use Azure Private Link to securely connect to AKS API server and Azure Container Registry.
Use Azure Policy to enforce compliance and security standards.
Use Azure Security Center to monitor and detect threats.
Use Kubernetes RBAC to control access to Kubernetes resources.
Use Kubernetes Network Policies to restrict network traffic between pods (a minimal sketch follows this list).
Use Kubernetes Secrets to store sensitive information.
Use Azure Monitor to monitor AKS clusters, applications, and infrastructure.
Use Log Analytics to analyze and troubleshoot issues in AKS.
Use container image scanning tools to detect vulnerabilities in container images.
Use Azure Key Vault to securely store and manage secrets and keys used by AKS applications.
Use Azure Active Directory for authentication and authorization of AKS cluster access.
Use Azure AD Workload Identity (the successor to Azure AD Pod Identity) to manage the identities of pods running in AKS.
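The network-policy item above can be expressed as a small manifest. The sketch below uses the official kubernetes Python client to create a policy that only allows ingress from pods in the same namespace; the namespace and policy names are assumptions, and the cluster must have a network-policy engine (Azure or Calico) enabled for it to take effect.

```python
from kubernetes import client, config

config.load_kube_config()  # e.g. credentials fetched earlier with `az aks get-credentials`

# Allow ingress only from pods in the same namespace; traffic from other namespaces is denied.
policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "deny-cross-namespace", "namespace": "prod"},
    "spec": {
        "podSelector": {},            # empty selector = every pod in the namespace
        "policyTypes": ["Ingress"],
        "ingress": [{"from": [{"podSelector": {}}]}],
    },
}
client.NetworkingV1Api().create_namespaced_network_policy(namespace="prod", body=policy)
```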
Configure authentication for AKS
Use Azure Active Directory (Azure AD) to authenticate users and applications.
Configure Azure AD integration with AKS using the Azure CLI or Azure portal.
Use Kubernetes role-based access control (RBAC) to control access to Kubernetes resources (see the sketch after this list).
Create Kubernetes service accounts for applications and bind them to specific Roles or ClusterRoles with role bindings.
Use Kubernetes secrets to store and manage sensitive information like API keys and credentials.
Use Azure AD Workload Identity to assign Azure identities to pods running in AKS.
Use Kubernetes Authentication and Authorization plugins to control access to Kubernetes resources.
Use Azure AD groups to manage access to AKS clusters and resources.
Use Azure AD Application Roles to assign roles to applications and manage access to AKS resources.
Use Azure AD Conditional Access policies to enforce access policies and security controls.
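As a hedged sketch of combining Kubernetes RBAC with Azure AD groups, the snippet below creates a namespaced Role and binds it to an Azure AD group (identified by its object ID) so that group members authenticated through Azure AD can read pods. All names and the group object ID are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": "prod"},
    "rules": [{"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list", "watch"]}],
}
rbac.create_namespaced_role(namespace="prod", body=role)

# Bind the role to an Azure AD group; AKS-managed Azure AD presents the group's object ID as the subject name.
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "pod-reader-binding", "namespace": "prod"},
    "subjects": [{"kind": "Group", "name": "<aad-group-object-id>", "apiGroup": "rbac.authorization.k8s.io"}],
    "roleRef": {"kind": "Role", "name": "pod-reader", "apiGroup": "rbac.authorization.k8s.io"},
}
rbac.create_namespaced_role_binding(namespace="prod", body=binding)
```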
Configure security monitoring for Azure Container Instances (ACIs)
Use Azure Security Center to monitor ACI containers for vulnerabilities and threats.
Enable container monitoring in Azure Monitor to collect and analyze logs and metrics from ACI containers.
Use Azure Log Analytics to monitor and troubleshoot issues in ACI containers (a query sketch follows this list).
Enable Azure Network Watcher to monitor network traffic to and from ACI containers.
Use Azure Application Insights to monitor application performance and errors in ACI containers.
Use container image scanning tools to detect vulnerabilities in container images used by ACI.
Use Azure Key Vault to securely store and manage secrets and keys used by ACI applications.
Use a managed identity on the container group to give applications running in ACI an Azure AD identity for accessing other Azure resources.
Use Azure AD authentication and authorization to control access to ACI resources.
Use RBAC to control access to ACI resources.
Enable auditing of ACI containers using Azure Policy or Azure Security Center.
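For the Log Analytics item above, the following sketch queries container logs with the azure-monitor-query SDK. The workspace ID, container group name, and the ContainerInstanceLog_CL table are assumptions that depend on how the container group's diagnostics are wired up.

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

logs = LogsQueryClient(DefaultAzureCredential())

# Pull the most recent log lines emitted by a container group over the past day (table name may vary).
query = """
ContainerInstanceLog_CL
| where ContainerGroup_s == "my-container-group"
| project TimeGenerated, Message
| top 50 by TimeGenerated desc
"""
result = logs.query_workspace(workspace_id="<workspace-id>", query=query, timespan=timedelta(days=1))
for table in result.tables:
    for row in table.rows:
        print(row)
```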
Configure security monitoring for Azure Container Apps (ACAs)
Use Azure Security Center to monitor ACA containers for vulnerabilities and threats.
Enable container monitoring in Azure Monitor to collect and analyze logs and metrics from ACA containers.
Use Azure Log Analytics to monitor and troubleshoot issues in ACA containers.
Enable Azure Network Watcher to monitor network traffic to and from ACA containers.
Use Azure Application Insights to monitor application performance and errors in ACA containers.
Use container image scanning tools to detect vulnerabilities in container images used by ACA.
Use Azure Key Vault to securely store and manage secrets and keys used by ACA applications (see the sketch after this list).
Use managed identities to give Container Apps an Azure AD identity for accessing other Azure resources.
Use Azure AD authentication and authorization to control access to ACA resources.
Use RBAC to control access to ACA resources.
Enable auditing of ACA containers using Azure Policy or Azure Security Center.
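To illustrate the Key Vault item above, the sketch below shows an application running in Azure Container Apps (or ACI) reading a secret with its managed identity via DefaultAzureCredential. The vault and secret names are placeholders, and the identity must already have been granted access to the vault.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Inside ACA/ACI, DefaultAzureCredential picks up the container's managed identity automatically.
credential = DefaultAzureCredential()
secrets = SecretClient(vault_url="https://<vault-name>.vault.azure.net", credential=credential)

db_password = secrets.get_secret("db-password").value  # never bake secrets into the image
```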
Manage access to Azure Container Registry (ACR)
Use Azure AD to authenticate users and services to ACR (see the sketch after this list).
Use the built-in Azure roles for ACR (such as AcrPull, AcrPush, AcrDelete, and AcrImageSigner, in addition to Owner, Contributor, and Reader) to grant access to ACR resources.
Use Azure role-based access control (RBAC) for registry-level permissions, and repository-scoped tokens with scope maps when you need finer-grained access to individual repositories.
Use Azure AD groups to manage access to ACR resources for multiple users and services.
Use ACR webhooks to integrate with external services and trigger actions based on events in ACR, such as new image pushes or repository deletions.
Use ACR geo-replication to replicate ACR images across multiple regions for improved availability and performance.
Use Azure Private Link to securely access ACR over a private endpoint within your virtual network.
Use ACR's built-in policies to enforce compliance and security requirements on container images stored in ACR.
Use Azure Policy to enforce organizational policies and compliance requirements on ACR resources.
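As a sketch of Azure AD authentication to ACR, the snippet below lists repositories with the azure-containerregistry data-plane SDK. The registry name is a placeholder, and the audience keyword may be optional depending on your SDK version.

```python
from azure.identity import DefaultAzureCredential
from azure.containerregistry import ContainerRegistryClient

acr = ContainerRegistryClient(
    endpoint="https://<registry-name>.azurecr.io",
    credential=DefaultAzureCredential(),
    audience="https://containerregistry.azure.net",  # public-cloud audience; adjust for sovereign clouds
)

# The caller needs an ACR data-plane role such as AcrPull for read operations.
for repo in acr.list_repository_names():
    print(repo)
```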
Configure disk encryption, including Azure Disk Encryption (ADE), encryption at host, and confidential disk encryption
Azure Disk Encryption (ADE) is a built-in disk encryption solution for Azure Virtual Machines (VMs) that uses the BitLocker feature of Windows and the dm-crypt feature of Linux to encrypt data at rest.
ADE uses Azure Key Vault to manage and store encryption keys.
Encryption at host is a feature that encrypts data at the Azure host where the VM runs, so that temporary disks and the OS/data disk caches are encrypted at rest and data flows to the Storage service already encrypted. It does not use the VM's CPU (unlike ADE) and works with either platform-managed keys or customer-managed keys.
Confidential disk encryption binds the disk encryption keys to the virtual machine's virtual TPM so the OS disk can only be unlocked from inside that VM, building on the hardware-based trusted execution environment of Azure confidential VMs rather than on ADE.
Confidential disk encryption requires VMs to be deployed as Azure confidential VM sizes, which are part of the Azure confidential computing portfolio, and is applied to the OS disk.
To configure disk encryption using ADE, you need to create an Azure Key Vault and configure a key and a key policy for ADE to use.
You also need to enable encryption on the VM's OS disk or data disks, either during VM creation or by using Azure PowerShell or Azure CLI.
Encryption at host is enabled as a property of the VM (the EncryptionAtHost feature must first be registered in the subscription) and can use either platform-managed keys or customer-managed keys supplied through a disk encryption set; it does not rely on ADE.
Confidential disk encryption requires you to deploy a confidential VM size and select confidential OS disk encryption when the VM is created.
Recommend security configurations for Azure API Management
Use Azure AD for authentication: Azure API Management allows you to integrate with Azure Active Directory (Azure AD) to authenticate users and authorize access to your APIs. This provides a secure and scalable way to manage access to your APIs and can be integrated with other Azure services for advanced security features like conditional access and multi-factor authentication.
Use HTTPS for secure communication: Use HTTPS to encrypt the communication between clients and your API Management instance. You can use a custom domain name and a certificate from a trusted certificate authority (CA) to secure your API Management instance.
Implement rate limiting: Implementing rate limiting can help prevent abuse and protect your APIs from excessive traffic or denial-of-service (DoS) attacks. You can configure rate limits based on IP address, user, or subscription.
Use IP restrictions: Use IP filtering to allow access to your APIs only from trusted IP addresses or ranges. This reduces exposure to unauthorized clients, but note that IP filtering does not protect against application-layer attacks such as SQL injection or cross-site request forgery; those require input validation, correct token handling, and, where appropriate, a web application firewall.
Implement OAuth 2.0 for authorization: Azure API Management supports OAuth 2.0 authorization flows for granting access to your APIs. This can help you control access to your APIs and manage user permissions.
Use API keys: Use subscription keys to identify clients and track usage of your APIs. Each subscription has a primary and a secondary key that you can regenerate or revoke as needed to control access (see the sketch after this list).
Implement logging and monitoring: Use Azure Monitor and API Management logs to monitor usage and detect suspicious activity. You can also use Application Insights to monitor performance and troubleshoot issues.
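A minimal client-side sketch of the subscription-key and OAuth 2.0 points above: calling an API behind API Management with both a subscription key and a bearer token. The gateway URL, API path, and exact header requirements are assumptions that depend on how your APIM instance and its policies (for example validate-jwt) are configured.

```python
import requests

headers = {
    "Ocp-Apim-Subscription-Key": "<subscription-key>",       # APIM subscription (API key)
    "Authorization": "Bearer <access-token-from-azure-ad>",  # validated by a validate-jwt policy
}
resp = requests.get("https://<apim-name>.azure-api.net/orders/v1/orders/123", headers=headers, timeout=10)
resp.raise_for_status()
print(resp.json())
```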
Plan and implement security for storage
Configure access control for storage accounts
Use Azure Role-Based Access Control (RBAC): Azure RBAC allows you to assign roles to users, groups, and applications to control access to Azure resources. You can assign roles at the subscription, resource group, or resource level.
Grant permissions: Use RBAC to grant permissions to users or groups to access specific storage accounts. You can assign management-plane roles such as Storage Account Contributor, or data-plane roles such as Storage Blob Data Contributor and Storage Blob Data Reader, to control access to the account and the data in it.
Use shared access signatures (SAS): Use SAS to grant temporary access to specific resources or containers within a storage account. You can set permissions and expiration times for SAS tokens (see the sketch after this list).
Use virtual networks: Use the storage account's firewall and virtual network settings to allow access only from selected virtual networks and subnets, plus specific public IP addresses or ranges where necessary.
Use service endpoints: Use service endpoints to allow access to storage accounts only from within a virtual network. This can help secure your storage accounts by limiting access to only trusted networks.
Use network rules: Use network rules to allow or deny access to storage accounts based on IP address ranges. You can configure network rules to allow access from specific IP addresses or ranges and block all other traffic.
Monitor access: Use Azure Monitor to monitor access to storage accounts and detect suspicious activity. You can enable logging and auditing to track changes and access to your storage accounts.
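For the SAS item above, here is a sketch that issues a short-lived, read-only SAS for a single blob with the azure-storage-blob SDK; the account, container, and blob names are placeholders.

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

sas = generate_blob_sas(
    account_name="<account>",
    container_name="reports",
    blob_name="2024/report.csv",
    account_key="<account-key>",                      # or use a user delegation key obtained via Azure AD
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)
url = f"https://<account>.blob.core.windows.net/reports/2024/report.csv?{sas}"
print(url)
```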
Manage life cycle for storage account access keys
Access keys are used to authenticate requests to Azure storage accounts.
Storage accounts have two access keys, which are used for redundancy and to allow for key rotation.
Azure recommends using Azure Active Directory (Azure AD) for authentication to storage accounts whenever possible.
Access keys should be rotated regularly for security purposes.
Azure provides the option to regenerate access keys manually or automatically.
To keep rotation on schedule, you can set a key expiration policy on the storage account that defines the rotation interval and lets Azure flag keys that are overdue; for hands-off rotation, you can have Azure Key Vault manage and regenerate the account keys for you.
Azure also provides the option to manage access keys programmatically using the Azure SDK or PowerShell (see the sketch after this list).
Best practices for managing access keys include storing them securely, limiting access to them, and monitoring access to them for suspicious activity.
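As a sketch of programmatic rotation, the snippet below regenerates key1 with the azure-mgmt-storage SDK. Resource names are placeholders, and applications should already be switched to key2 before key1 is rotated.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

storage = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Rotate key1 after clients have been pointed at key2 (then repeat the other way around).
storage.storage_accounts.regenerate_key("<resource-group>", "<account-name>", {"key_name": "key1"})

for key in storage.storage_accounts.list_keys("<resource-group>", "<account-name>").keys:
    print(key.key_name, "rotated" if key.key_name == "key1" else "unchanged")
```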
Select and configure an appropriate method for access to Azure Files
Azure Files is a fully managed file share solution in the cloud.
There are several methods for accessing Azure Files, including SMB (Server Message Block), NFS (Network File System), and REST API.
SMB is the most commonly used method for accessing Azure Files, and is compatible with Windows and Linux operating systems.
NFS (version 4.1) is primarily used by Linux operating systems and is supported on premium file shares.
REST API can be used for programmatic access to Azure Files, and is supported by a wide range of programming languages and platforms through the Azure SDKs (see the sketch after this list).
Access to Azure Files can be configured using shared access signatures (SAS), Azure AD authentication, or network-based access control.
SAS provides a secure way to grant limited access to Azure Files, and can be used to restrict access to specific files or folders.
Azure AD authentication allows users to access Azure Files using their Azure AD credentials, which can be more secure than using SAS.
Network-based access control can be used to restrict access to Azure Files based on IP address ranges or virtual network service endpoints.
When selecting a method for access to Azure Files, consider factors such as security, compatibility with your operating system and applications, and ease of configuration and management.
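For the programmatic (REST/SDK-based) option mentioned above, the sketch below uploads a file to an Azure Files share with the azure-storage-file-share SDK; the connection string, share name, and path are placeholders.

```python
from azure.storage.fileshare import ShareFileClient

file_client = ShareFileClient.from_connection_string(
    conn_str="<storage-connection-string>",
    share_name="teamshare",
    file_path="reports/summary.txt",
)

with open("summary.txt", "rb") as data:
    file_client.upload_file(data)  # creates or overwrites the file in the share
```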
Select and configure an appropriate method for access to Azure Blob Storage
Azure Portal: You can use the Azure portal to upload and download files from Azure Blob Storage. You can also use it to manage your blob storage accounts and containers.
Azure Storage Explorer: Azure Storage Explorer is a free, standalone application that allows you to easily work with Azure Blob Storage. It is available for Windows, macOS, and Linux.
Azure PowerShell: You can use Azure PowerShell to create, manage, and delete Azure Blob Storage containers and blobs. PowerShell scripts can also be used to automate tasks.
Azure CLI: Azure CLI is a command-line interface that you can use to manage Azure resources, including Azure Blob Storage. It is available on Windows, macOS, and Linux.
REST API: The Azure Blob Storage REST API provides a way to programmatically access Blob Storage. This allows you to create, manage, and delete blobs and containers programmatically from your applications.
SDKs: Microsoft provides SDKs for a variety of programming languages, including .NET, Java, Python, and Node.js. These SDKs allow you to interact with Azure Blob Storage programmatically and integrate it into your applications.
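A short sketch of the SDK option above: uploading a blob with the azure-storage-blob SDK and Azure AD authentication (the caller needs a data-plane role such as Storage Blob Data Contributor). The account and container names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
container = service.get_container_client("reports")

with open("report.csv", "rb") as data:
    container.upload_blob(name="2024/report.csv", data=data, overwrite=True)
```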
Select and configure an appropriate method for access to Azure Tables
Azure Storage Explorer: Azure Storage Explorer is a free, standalone app from Microsoft that enables you to work with Azure Storage data on Windows, macOS, and Linux. It provides a graphical user interface to browse, manage, and access Azure Table data.
Azure Portal: You can access and manage Azure Tables via the Azure Portal. Navigate to the storage account that contains the table and select the table you want to access.
Azure Storage REST API: You can use the Azure Storage REST API to programmatically access Azure Tables. This method requires you to write code to send HTTP requests and receive responses that contain table data.
Azure Storage Client Libraries: You can use the Azure Storage Client Libraries to access Azure Tables programmatically. The client libraries provide a set of abstractions that simplify working with Azure Tables, and they support multiple programming languages and platforms, including .NET, Java, Python, and Node.js.
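To make the client-library option above concrete, here is a sketch using the azure-data-tables SDK to create a table and insert an entity; the connection string, table name, and entity values are placeholders.

```python
from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string("<storage-connection-string>")
table = service.create_table_if_not_exists("AuditEvents")

table.create_entity({
    "PartitionKey": "2024-06",
    "RowKey": "evt-001",
    "Action": "login",
    "User": "alice@contoso.com",
})
```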
Select and configure an appropriate method for access to Azure Queues
Azure Queues allow application components to communicate asynchronously by passing messages between them. In addition to Azure AD authorization, the queue service supports two storage-level authentication methods:
· Shared Key Authorization: This is a simple and commonly used method that requires the storage account name and account key to authenticate the request.
· Shared Access Signatures (SAS): SAS provides a more granular control over access to the queue service, allowing you to specify access permissions, start and expiry times for the permission, and the IP address range for the permission.
To access Azure Queues, you can use the following methods:
· Azure Portal: You can use the Azure portal to manage your queues and perform operations such as creating a new queue, adding messages to a queue, and monitoring the status of your queues.
· Azure Storage SDK: You can use the Azure Storage SDK for .NET, Java, Python, Node.js, Ruby, and other programming languages to access queues programmatically (see the sketch after this list).
· REST API: You can use the REST API to access queues programmatically. You can use any programming language that supports HTTP requests to interact with the API.
· Azure Storage Explorer: Azure Storage Explorer is a free, cross-platform tool that allows you to access and manage your queues, as well as other Azure storage resources, from a graphical user interface (GUI).
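For the SDK option above, a minimal azure-storage-queue sketch that sends and receives a message; the connection string and queue name are placeholders.

```python
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string("<storage-connection-string>", queue_name="orders")
queue.create_queue()                 # raises ResourceExistsError if the queue already exists

queue.send_message("order-12345")

for msg in queue.receive_messages():
    print(msg.content)
    queue.delete_message(msg)        # remove the message once it has been processed
```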
Select and configure appropriate methods for protecting against data security threats, including soft delete, backups, versioning, and immutable storage
Soft delete: Enables recovery of data that was deleted accidentally or maliciously. When soft delete is enabled, deleted objects are retained for a specified period before they are permanently deleted (see the sketch after this list).
Backups: Azure Storage provides several options to help protect against data loss, including blob snapshots, point-in-time restore for block blobs, operational backup of blobs through Azure Backup, and geo-redundant storage.
Versioning: Azure Blob Storage provides versioning support, which allows you to preserve and restore previous versions of blobs. This feature helps protect against accidental or malicious overwrites or deletions.
Immutable storage: Azure Blob Storage provides immutable storage, which allows you to store data in a write-once-read-many (WORM) state. This helps protect against accidental or malicious modification or deletion of data.
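As a sketch of the soft-delete item above, the snippet below turns on blob soft delete with a 14-day retention window using the azure-storage-blob SDK. The retention period is an assumed value; blob versioning and immutability policies are configured on the management plane or in the portal instead.

```python
from azure.storage.blob import BlobServiceClient, RetentionPolicy

service = BlobServiceClient.from_connection_string("<storage-connection-string>")

# Deleted blobs stay recoverable for 14 days before being permanently removed.
service.set_service_properties(
    delete_retention_policy=RetentionPolicy(enabled=True, days=14)
)
```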
Configure Bring your own key (BYOK)
Bring your own key (BYOK) is a feature in Azure that allows customers to bring their own encryption keys to encrypt and decrypt their data.
BYOK is available for several Azure services, including Azure Storage, Azure Disk Encryption, and Azure Virtual Machines.
With BYOK, customers can use their own keys to protect their data in Azure, instead of relying on keys managed by Azure.
BYOK requires the use of Azure Key Vault, a cloud-based service for storing and managing cryptographic keys, secrets, and certificates.
To configure BYOK, customers first need to create a key vault in their Azure subscription and then generate a key or upload an existing key to the key vault (see the sketch after this list).
Once the key is in the key vault, customers can configure their Azure services to use the key for encryption and decryption.
BYOK provides customers with greater control over their data encryption keys, which can help them meet regulatory and compliance requirements. However, it also requires additional management overhead and increases the risk of key loss or compromise.
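A small sketch of the key-generation step above using the azure-keyvault-keys SDK; the vault URL and key name are placeholders, and the key identifier it prints is what you reference when pointing a service's customer-managed-key settings at the vault.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

keys = KeyClient(vault_url="https://<vault-name>.vault.azure.net", credential=DefaultAzureCredential())

# Create (or import) the key that Azure services will use as the customer-managed key.
cmk = keys.create_rsa_key("storage-cmk", size=3072)
print(cmk.id)   # e.g. https://<vault-name>.vault.azure.net/keys/storage-cmk/<version>
```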
Enable double encryption at the Azure Storage infrastructure level
Infrastructure encryption (double encryption) adds a second layer of 256-bit AES encryption at the Azure Storage infrastructure level, using platform-managed keys, on top of the default service-level encryption.
Infrastructure encryption must be enabled when the storage account or encryption scope is created; it cannot be turned on for an existing account (see the sketch below).
At the service level you can continue to use Microsoft-managed keys or configure a customer-managed key (CMK) from Azure Key Vault; the infrastructure layer always uses platform-managed keys.
Because each layer uses its own key, data at rest remains protected even if one of the keys or algorithms is compromised.
Monitor and manage access to any customer-managed keys and to the storage account to ensure proper security.
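Because infrastructure encryption has to be chosen at creation time, the sketch below creates a new storage account with it enabled via the azure-mgmt-storage SDK. The property names follow the Storage resource provider's schema and, along with the location and SKU, are assumptions to verify against the SDK version you use.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

storage = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = storage.storage_accounts.begin_create(
    "<resource-group>",
    "<new-account-name>",
    {
        "location": "westeurope",
        "sku": {"name": "Standard_LRS"},
        "kind": "StorageV2",
        "encryption": {
            "key_source": "Microsoft.Storage",            # service-level keys (swap in Key Vault CMK if needed)
            "require_infrastructure_encryption": True,    # second, infrastructure-level layer
            "services": {"blob": {"enabled": True}, "file": {"enabled": True}},
        },
    },
)
account = poller.result()
print(account.name, "created with infrastructure encryption")
```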
Plan and implement security for Azure SQL Database and Azure SQL Managed Instance
Enable database authentication by using Microsoft Azure AD
Azure AD is a cloud-based identity and access management service that allows you to manage user identities and access to resources in the cloud.
Azure AD can be used to authenticate users and applications to a variety of Azure services, including Azure SQL Database, Azure Synapse Analytics, and Azure Data Lake Storage Gen2.
By using Azure AD authentication, you can simplify user management, reduce the risk of password-based attacks, and enable single sign-on across Azure services.
To enable Azure AD authentication for an Azure SQL Database or Azure Synapse Analytics instance, you first configure an Azure AD admin for the logical server and then create contained database users for Azure AD principals (CREATE USER ... FROM EXTERNAL PROVIDER) and grant them appropriate permissions.
You can create an Azure AD principal by using the Azure portal, Azure PowerShell, or Azure CLI.
Once you have created an Azure AD principal, you can use it to connect to your Azure SQL Database or Azure Synapse Analytics instance using Azure AD authentication (see the sketch after this list).
To enable Azure AD authentication for Azure Data Lake Storage Gen2, you need to create an Azure AD application and grant it appropriate permissions.
You can create an Azure AD application by using the Azure portal or Azure PowerShell.
Once you have created an Azure AD application, you can use it to authenticate requests to your Azure Data Lake Storage Gen2 account by using OAuth 2.0.
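As a sketch of connecting with Azure AD instead of SQL logins, the snippet below acquires a token for the Azure SQL resource and passes it to pyodbc. The server and database names are placeholders, and the contained user for the identity must already exist in the database.

```python
import struct
import pyodbc
from azure.identity import DefaultAzureCredential

# Acquire an access token for Azure SQL and pack it in the format the ODBC driver expects.
token = DefaultAzureCredential().get_token("https://database.windows.net/.default").token
token_bytes = token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)
SQL_COPT_SS_ACCESS_TOKEN = 1256  # connection attribute used by the Microsoft ODBC driver

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<server>.database.windows.net;DATABASE=<database>;Encrypt=yes;",
    attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct},
)
print(conn.execute("SELECT SUSER_SNAME();").fetchone())  # shows the Azure AD identity in use
```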
Enable database auditing
Azure provides the capability to audit database activities such as queries, login events, and changes to schema or data.
Auditing can be enabled at the server or database level, and audit logs can be written to an Azure Storage account, a Log Analytics workspace, or Event Hubs.
To enable auditing for a server or database, navigate to the Azure portal and select the server or database, then click on "Auditing" in the menu.
Select the events to audit and configure the audit logs storage settings.
Auditing can also be configured using Azure PowerShell, Azure CLI, or ARM templates.
Audit logs can be viewed and analyzed using Azure Monitor or third-party tools.
Identify use cases for the Microsoft Purview governance portal
The Microsoft Purview governance portal is used to manage and govern your organization's data. Some of its use cases include:
Data discovery and classification - Purview helps you discover and classify your organization's data assets, and provides a unified view of your data landscape.
Data cataloging - Purview allows you to catalog your data assets, making it easy for users to find, understand, and use the data they need.
Data lineage and impact analysis - Purview provides a detailed view of the lineage of your data, helping you understand how it is transformed and used across your organization.
Data privacy and compliance - Purview helps you comply with data privacy regulations and policies by providing a centralized platform to manage data access and permissions.
Data analytics and insights - Purview integrates with other Azure services to provide powerful analytics and insights on your data assets.
Implement data classification of sensitive information by using the Microsoft Purview governance portal
To implement data classification of sensitive information by using the Microsoft Purview governance portal, you can follow these steps:
· Connect your data sources to Purview: Start by connecting your data sources to the Purview governance portal. This can be done using a range of connectors available in Purview, including for Azure Data Services, on-premises data sources, and SaaS applications.
· Discover and scan your data: Use the Purview data discovery and scanning capabilities to identify sensitive data across your data sources. You can create scans based on predefined templates, or define your own custom scans.
· Classify your data: Once your scans are complete, use the Purview classification capabilities to assign sensitivity labels to your data. Purview provides built-in classifiers for common data types such as credit card numbers, social security numbers, and personally identifiable information (PII). You can also define your own custom classifiers to identify additional sensitive data types.
· Monitor and manage data access: Use Purview's data access management capabilities to control who can access your sensitive data. You can define access policies based on user roles, and set up alerts to notify you when there are unauthorized access attempts.
· Maintain compliance: Use Purview's compliance reporting capabilities to monitor and report on your data governance and compliance efforts. Purview provides prebuilt compliance reports for common regulatory standards such as GDPR and CCPA.
Plan and implement dynamic masking
Dynamic data masking is available in Azure SQL Database, Azure SQL Managed Instance, Azure Synapse Analytics, and SQL Server 2016 and later.
Dynamic masking can be used to mask sensitive data based on various criteria, such as user or role, without altering the data in the database.
Dynamic masking supports several masking functions, such as full (default) masking, partial masking with a custom padding string, email masking, and random number masking.
You can implement dynamic masking using T-SQL statements or by using the Azure portal or Azure PowerShell (see the sketch after this list).
Dynamic data masking is applied at the column level; to restrict which rows a user can see, use Row-Level Security instead.
When implementing dynamic masking, you should carefully consider the sensitivity of the data, the access requirements, and the applicable regulations or compliance standards.
Dynamic masking should be tested thoroughly to ensure that it is working as expected and not causing any performance issues or unintended consequences.
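For the T-SQL option above, the sketch below applies built-in masking functions to two columns and grants UNMASK to a role; the table, column, and role names are placeholders.

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<server>.database.windows.net;DATABASE=<database>;UID=<admin>;PWD=<password>;Encrypt=yes;"
)
conn.autocommit = True
cur = conn.cursor()

# Mask the e-mail column with the built-in email() function.
cur.execute("ALTER TABLE dbo.Customers ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');")
# Keep the first two characters of the phone number and pad the rest.
cur.execute("""ALTER TABLE dbo.Customers ALTER COLUMN Phone
               ADD MASKED WITH (FUNCTION = 'partial(2, "XXXXXXXX", 0)');""")
# Only this role sees unmasked values; everyone else gets the masked output.
cur.execute("GRANT UNMASK TO SupportAnalysts;")
```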
Implement Transparent Data Encryption (TDE)
In Azure SQL Database and Azure SQL Managed Instance, TDE is enabled by default for new databases with a service-managed key. You can verify or toggle it from the Azure portal, with PowerShell or the Azure CLI, or with T-SQL (see the sketch below), and you can optionally switch the TDE protector to a customer-managed key stored in Azure Key Vault.
For SQL Server on-premises or on a VM, implementing TDE involves these steps:
1. Create a database master key in the master database.
2. Create a server certificate (or use an existing one) protected by the master key.
3. Create a database encryption key (DEK) in the user database, encrypted by the server certificate.
4. Enable TDE for the database with ALTER DATABASE ... SET ENCRYPTION ON.
5. Back up the certificate and its private key; without them the encrypted database cannot be restored on another server.
Once TDE is enabled, the database files are encrypted at rest. When a user or application connects to the database, the data is automatically decrypted, so there is no impact on the application.
It's important to note that TDE only encrypts the database files at rest. If you need to encrypt data in transit, you will need to use other encryption methods, such as SSL or TLS.
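A short sketch of checking and enabling TDE on an Azure SQL database with T-SQL via pyodbc (connect to the logical server's master database as an administrator; the server and database names are placeholders). In Azure SQL Database the service manages the key hierarchy, so no certificate or DEK has to be created by hand.

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<server>.database.windows.net;DATABASE=master;UID=<admin>;PWD=<password>;Encrypt=yes;"
)
conn.autocommit = True
cur = conn.cursor()

# 1 = already encrypted, 0 = not encrypted yet.
cur.execute("SELECT name, is_encrypted FROM sys.databases WHERE name = 'SalesDb';")
print(cur.fetchone())

# Turn TDE on for the database (encryption proceeds in the background).
cur.execute("ALTER DATABASE [SalesDb] SET ENCRYPTION ON;")
```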
Azure SQL Database Always Encrypted should be used when there is a requirement to protect sensitive data, such as personally identifiable information (PII) or financial information, from unauthorized access or data breaches. With Always Encrypted, sensitive data remains encrypted at rest, in transit, and in use inside the database engine, providing end-to-end protection and minimizing the risk of data breaches. The encryption keys are managed outside of the database, and only authorized parties have access to the keys, making it more difficult for attackers to reach the sensitive data. Always Encrypted is particularly useful when third-party applications or services need to access the database but the database owner wants sensitive data to stay protected even if the third-party system is compromised.