azure.storage.blob package Azure SDK for Python 2.0.0 documentation

Publish date: 2024-05-26
exception azure.storage.blob.PartialBatchErrorException(message, response, parts)[source]

There is a partial failure in batch operations.

Parameters

raise_with_traceback()

with_traceback()

Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.

args

class azure.storage.blob.AccessPolicy(permission=None, expiry=None, start=None)[source]

Access Policy class used by the set and get access policy methods in each service.

A stored access policy can specify the start time, expiry time, and permissions for the Shared Access Signatures with which it’s associated. Depending on how you want to control access to your resource, you can specify all of these parameters within the stored access policy, and omit them from the URL for the Shared Access Signature. Doing so permits you to modify the associated signature’s behavior at any time, as well as to revoke it. Or you can specify one or more of the access policy parameters within the stored access policy, and the others on the URL. Finally, you can specify all of the parameters on the URL. In this case, you can use the stored access policy to revoke the signature, but not to modify its behavior.

Together the Shared Access Signature and the stored access policy must include all fields required to authenticate the signature. If any required fields are missing, the request will fail. Likewise, if a field is specified both in the Shared Access Signature URL and in the stored access policy, the request will fail with status code 400 (Bad Request).
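The resolution rule described above can be sketched in plain Python. This is an illustrative model only, not part of the SDK: every required field must be supplied exactly once, either by the stored access policy or by the SAS URL, never both and never neither. The field names and the helper are hypothetical.

```python
# Hypothetical sketch of the field-resolution rule: a required field
# specified in both the stored policy and the URL fails (400 Bad Request),
# and a field specified in neither also fails.
REQUIRED_FIELDS = ("permission", "expiry")  # illustrative subset

def resolve_sas_fields(stored_policy, url_params):
    resolved = {}
    for field in REQUIRED_FIELDS:
        in_policy = stored_policy.get(field) is not None
        in_url = url_params.get(field) is not None
        if in_policy and in_url:
            raise ValueError(f"{field}: specified in both policy and URL (400 Bad Request)")
        if not in_policy and not in_url:
            raise ValueError(f"{field}: missing required field")
        resolved[field] = stored_policy[field] if in_policy else url_params[field]
    return resolved
```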

Parameters

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage might optionally use a callback as parameter:

Key is the attribute name used in Python. Attr_desc is a dict of metadata. Currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key. Value is the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered hierarchical result dict.
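A transformer callback with the shape described above, (key, attribute_description, value), can be sketched as follows. camel_case_transformer is a hypothetical example for illustration, not an msrest API:

```python
# Sketch of a key-transformer callback: receives the Python attribute
# name, a metadata dict (with 'type' and 'key'), and the current value,
# and returns the string to use when serializing the key.
def camel_case_transformer(key, attr_desc, value):
    # Turn a snake_case Python attribute name into camelCase for JSON.
    head, *rest = key.split("_")
    return head + "".join(part.capitalize() for part in rest)

serialized_key = camel_case_transformer(
    "last_modified", {"type": "iso-8601", "key": "LastModified"}, None)
```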

See the three examples in this file:

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A dict JSON compatible object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()

classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, the key extractors rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor and last_rest_key_case_insensitive_extractor are considered.

Parameters

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()

serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to Azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

keep_readonly (bool) – If you want to serialize the readonly attributes

Returns

A dict JSON compatible object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation errors

Return type

list

class azure.storage.blob.AccountSasPermissions(read=False, write=False, delete=False, list=False, add=False, create=False, update=False, process=False, delete_previous_version=False, **kwargs)[source]

Permissions class to be used with the generate_account_sas function and for the AccessPolicies used with set_*_acl. There are two types of SAS which may be used to grant resource access. One is to grant access to a specific resource (resource-specific). Another is to grant access to the entire service for a specific account and allow certain operations based on the permissions found here.

Parameters

Keyword Arguments

classmethod from_string(permission)[source]

Create AccountSasPermissions from a string.

To specify read, write, delete, etc. permissions you need only include the first letter of the word in the string. E.g. for read and write permissions you would provide the string “rw”.

Parameters

permission (str) – Specify permissions in the string with the first letter of the word.

Returns

An AccountSasPermissions object

Return type

AccountSasPermissions
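The letter-based parsing described for from_string can be illustrated with a small standalone sketch. This mirrors the documented behavior but is not the SDK implementation; the class and mapping below are hypothetical:

```python
# Illustrative sketch: each recognized letter in the permission string
# switches on the matching boolean flag, as from_string is documented to do.
class PermissionFlags:
    _letters = {"r": "read", "w": "write", "d": "delete", "l": "list",
                "a": "add", "c": "create", "u": "update", "p": "process"}

    def __init__(self, **flags):
        for name in self._letters.values():
            setattr(self, name, flags.get(name, False))

    @classmethod
    def from_string(cls, permission):
        return cls(**{cls._letters[ch]: True
                      for ch in permission if ch in cls._letters})

perms = PermissionFlags.from_string("rw")
```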

class azure.storage.blob.ArrowDialect(type, **kwargs)[source]

Field of an Arrow schema.

All required parameters must be populated in order to send to Azure.

Parameters

type (ArrowType) – Arrow field type.

Keyword Arguments

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage might optionally use a callback as parameter:

Key is the attribute name used in Python. Attr_desc is a dict of metadata. Currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key. Value is the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered hierarchical result dict.

See the three examples in this file:

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A dict JSON compatible object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()

classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, the key extractors rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor and last_rest_key_case_insensitive_extractor are considered.

Parameters

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()

serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to Azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

keep_readonly (bool) – If you want to serialize the readonly attributes

Returns

A dict JSON compatible object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation errors

Return type

list

class azure.storage.blob.ArrowType(value)[source]

An enumeration.

BOOL = 'bool'
DECIMAL = 'decimal'
DOUBLE = 'double'
INT64 = 'int64'
STRING = 'string'
TIMESTAMP_MS = 'timestamp[ms]'

class azure.storage.blob.BlobAnalyticsLogging(**kwargs)[source]

Azure Analytics Logging settings.

Keyword Arguments

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage might optionally use a callback as parameter:

Key is the attribute name used in Python. Attr_desc is a dict of metadata. Currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key. Value is the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered hierarchical result dict.

See the three examples in this file:

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A dict JSON compatible object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()

classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, the key extractors rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor and last_rest_key_case_insensitive_extractor are considered.

Parameters

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()

serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to Azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

keep_readonly (bool) – If you want to serialize the readonly attributes

Returns

A dict JSON compatible object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation errors

Return type

list

class azure.storage.blob.BlobBlock(block_id, state=<BlockState.Latest: 'Latest'>)[source]

BlockBlob Block class.

Parameters

Variables

size (int) – Block size in bytes.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()

class azure.storage.blob.BlobClient(account_url: str, container_name: str, blob_name: str, snapshot: Optional[Union[str, Dict[str, Any]]] = None, credential: Optional[Any] = None, **kwargs: Any)[source]

A client to interact with a specific blob, although that blob may not yet exist.

For more optional configuration, please click here.

Parameters

Keyword Arguments

Example:

Creating the BlobClient from a URL to a public blob (no auth needed).
from azure.storage.blob import BlobClient

blob_client = BlobClient.from_blob_url(blob_url="https://account.blob.core.windows.net/container/blob-name")
Creating the BlobClient from a SAS URL to a blob.
from azure.storage.blob import BlobClient

sas_url = "https://account.blob.core.windows.net/container/blob-name?sv=2015-04-05&st=2015-04-29T22%3A18%3A26Z&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D"
blob_client = BlobClient.from_blob_url(sas_url)
abort_copy(copy_id: Union[str, Dict[str, Any], BlobProperties], **kwargs: Any)None[source]

Abort an ongoing copy operation.

This will leave a destination blob with zero length and full metadata. This will raise an error if the copy operation has already ended.

Parameters

copy_id (str or BlobProperties) – The copy operation to abort. This can be either an ID string, or an instance of BlobProperties.

Return type

None

Example:

Abort copying a blob from URL.
# Passing in copy id to abort copy operation
copied_blob.abort_copy(copy_id)

# check copy status
props = copied_blob.get_blob_properties()
print(props.copy.status)
acquire_lease(lease_duration: int = - 1, lease_id: Optional[str] = None, **kwargs: Any)BlobLeaseClient[source]

Requests a new lease.

If the blob does not have an active lease, the Blob Service creates a lease on the blob and returns a new lease.

Parameters

Keyword Arguments

Returns

A BlobLeaseClient object.

Return type

BlobLeaseClient

Example:

Acquiring a lease on a blob.
# Acquire a lease on the blob
lease = blob_client.acquire_lease()

# Delete blob by passing in the lease
blob_client.delete_blob(lease=lease)
append_block(data: Union[AnyStr, Iterable[AnyStr], IO[AnyStr]], length: Optional[int] = None, **kwargs)Dict[str, Union[str, datetime, int]][source]

Commits a new block of data to the end of the existing append blob.

Parameters

Keyword Arguments

Returns

Blob-updated property dict (Etag, last modified, append offset, committed block count).

Return type

dict(str, Any)

append_block_from_url(copy_source_url: str, source_offset: Optional[int] = None, source_length: Optional[int] = None, **kwargs)Dict[str, Union[str, datetime, int]][source]

Creates a new block to be committed as part of a blob, where the contents are read from a source url.

Parameters

Keyword Arguments

clear_page(offset: int, length: int, **kwargs: Any)Dict[str, Union[str, datetime]][source]

Clears a range of pages.

Parameters

Keyword Arguments

Returns

Blob-updated property dict (Etag and last modified).

Return type

dict(str, Any)

close()

This method closes the sockets opened by the client. It need not be called when the client is used as a context manager.

commit_block_list(block_list: List[BlobBlock], content_settings: Optional[ContentSettings] = None, metadata: Optional[Dict[str, str]] = None, **kwargs)Dict[str, Union[str, datetime]][source]

The Commit Block List operation writes a blob by specifying the list of block IDs that make up the blob.

Parameters

Keyword Arguments

Returns

Blob-updated property dict (Etag and last modified).

Return type

dict(str, Any)

create_append_blob(content_settings: Optional[ContentSettings] = None, metadata: Optional[Dict[str, str]] = None, **kwargs: Any)Dict[str, Union[str, datetime]][source]

Creates a new Append Blob.

Parameters

Keyword Arguments

Returns

Blob-updated property dict (Etag and last modified).

Return type

dict[str, Any]

create_page_blob(size: int, content_settings: Optional[ContentSettings] = None, metadata: Optional[Dict[str, str]] = None, premium_page_blob_tier: Optional[Union[str, PremiumPageBlobTier]] = None, **kwargs)Dict[str, Union[str, datetime]][source]

Creates a new Page Blob of the specified size.

Parameters

Keyword Arguments

Returns

Blob-updated property dict (Etag and last modified).

Return type

dict[str, Any]

create_snapshot(metadata: Optional[Dict[str, str]] = None, **kwargs: Any)Dict[str, Union[str, datetime]][source]

Creates a snapshot of the blob.

A snapshot is a read-only version of a blob that’s taken at a point in time. It can be read, copied, or deleted, but not modified. Snapshots provide a way to back up a blob as it appears at a moment in time.

A snapshot of a blob has the same name as the base blob from which the snapshot is taken, with a DateTime value appended to indicate the time at which the snapshot was taken.

Parameters

metadata (dict(str, str)) – Name-value pairs associated with the blob as metadata.

Keyword Arguments

Returns

Blob-updated property dict (Snapshot ID, Etag, and last modified).

Return type

dict[str, Any]

Example:

Create a snapshot of the blob.
# Create a read-only snapshot of the blob at this point in time
snapshot_blob = blob_client.create_snapshot()

# Get the snapshot ID
print(snapshot_blob.get('snapshot'))
delete_blob(delete_snapshots: str = None, **kwargs: Any)None[source]

Marks the specified blob for deletion.

The blob is later deleted during garbage collection. Note that in order to delete a blob, you must delete all of its snapshots. You can delete both at the same time with the delete_blob() operation.

If a delete retention policy is enabled for the service, then this operation soft deletes the blob and retains the blob for a specified number of days. After the specified number of days, the blob’s data is removed from the service during garbage collection. A soft-deleted blob is accessible through list_blobs() by specifying the include=['deleted'] option. A soft-deleted blob can be restored using the undelete() operation.

Parameters

delete_snapshots (str) –

Required if the blob has associated snapshots. Values include:

Keyword Arguments

Return type

None

Example:

Delete a blob.
blob_client.delete_blob() 
delete_immutability_policy(**kwargs: Any)None[source]

The Delete Immutability Policy operation deletes the immutability policy on the blob.

New in version 12.10.0: This operation was introduced in API version ‘2020-10-02’.

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Returns

Key value pairs of blob tags.

Return type

Dict[str, str]

download_blob(offset: Optional[int] = None, length: Optional[int] = None, **kwargs: Any)StorageStreamDownloader[source]

Downloads a blob to the StorageStreamDownloader. The readall() method must be used to read all the content or readinto() must be used to download the blob into a stream. Using chunks() returns an iterator which allows the user to iterate over the content in chunks.

Parameters

Keyword Arguments

Returns

A streaming object (StorageStreamDownloader)

Return type

StorageStreamDownloader

Example:

Download a blob.
with open(DEST_FILE, "wb") as my_blob:
    download_stream = blob_client.download_blob()
    my_blob.write(download_stream.readall())
exists(**kwargs: Any)bool[source]

Returns True if a blob exists with the defined parameters, and returns False otherwise.

Keyword Arguments

Returns

boolean

classmethod from_blob_url(blob_url: str, credential: Optional[Any] = None, snapshot: Optional[Union[str, Dict[str, Any]]] = None, **kwargs: Any)azure.storage.blob._blob_client.BlobClient[source]

Create BlobClient from a blob URL. This does not support a customized blob URL with ‘/’ in the blob name.

Parameters

Returns

A Blob client.

Return type

BlobClient

classmethod from_connection_string(conn_str: str, container_name: str, blob_name: str, snapshot: Optional[str] = None, credential: Optional[Any] = None, **kwargs: Any)azure.storage.blob._blob_client.BlobClient[source]

Create BlobClient from a Connection String.

Parameters

Returns

A Blob client.

Return type

BlobClient

Example:

Creating the BlobClient from a connection string.
from azure.storage.blob import BlobClient

blob_client = BlobClient.from_connection_string(
    self.connection_string,
    container_name="mycontainer",
    blob_name="blobname.txt")
get_account_information(**kwargs: Any)Dict[str, str][source]

Gets information related to the storage account in which the blob resides.

The information can also be retrieved if the user has a SAS to a container or blob. The keys in the returned dictionary include ‘sku_name’ and ‘account_kind’.

Returns

A dict of account information (SKU and account type).

Return type

dict(str, str)

get_blob_properties(**kwargs: Any)BlobProperties[source]

Returns all user-defined metadata, standard HTTP properties, and system properties for the blob. It does not return the content of the blob.

Keyword Arguments

Returns

BlobProperties

Return type

BlobProperties

Example:

Getting the properties for a blob.
properties = blob_client.get_blob_properties() 
get_blob_tags(**kwargs: Any)Dict[str, str][source]

The Get Tags operation enables users to get tags on a blob or specific blob version, or snapshot.

New in version 12.4.0: This operation was introduced in API version ‘2019-12-12’.

Keyword Arguments

Returns

Key value pairs of blob tags.

Return type

Dict[str, str]

get_block_list(block_list_type: Optional[str] = 'committed', **kwargs: Any)Tuple[List[BlobBlock], List[BlobBlock]][source]

The Get Block List operation retrieves the list of blocks that have been uploaded as part of a block blob.

Parameters

block_list_type (str) – Specifies whether to return the list of committed blocks, the list of uncommitted blocks, or both lists together. Possible values include: ‘committed’, ‘uncommitted’, ‘all’

Keyword Arguments

Returns

A tuple of two lists - committed and uncommitted blocks

Return type

tuple(list(BlobBlock), list(BlobBlock))

get_page_range_diff_for_managed_disk(previous_snapshot_url: str, offset: Optional[int] = None, length: Optional[int] = None, **kwargs)Tuple[List[Dict[str, int]], List[Dict[str, int]]][source]

Returns the list of valid page ranges for a managed disk or snapshot.

Note

This operation is only available for managed disk accounts.

New in version 12.2.0: This operation was introduced in API version ‘2019-07-07’.

Parameters

Keyword Arguments

Returns

A tuple of two lists of page ranges as dictionaries with ‘start’ and ‘end’ keys. The first element is the filled page ranges; the second element is the cleared page ranges.

Return type

tuple(list(dict(str, str)), list(dict(str, str)))

get_page_ranges(offset: Optional[int] = None, length: Optional[int] = None, previous_snapshot_diff: Optional[Union[str, Dict[str, Any]]] = None, **kwargs)Tuple[List[Dict[str, int]], List[Dict[str, int]]][source]

Returns the list of valid page ranges for a Page Blob or snapshot of a page blob.

Parameters

Keyword Arguments

Returns

A tuple of two lists of page ranges as dictionaries with ‘start’ and ‘end’ keys. The first element is the filled page ranges; the second element is the cleared page ranges.

Return type

tuple(list(dict(str, str)), list(dict(str, str)))

query_blob(query_expression: str, **kwargs: Any)BlobQueryReader[source]

Enables users to select/project on blob or blob snapshot data by providing simple query expressions. This operation returns a BlobQueryReader; users need to use readall() or readinto() to get the query data.

Parameters

query_expression (str) – Required. A query statement.

Keyword Arguments

Returns

A streaming object (BlobQueryReader)

Return type

BlobQueryReader

Example:

select/project on blob/or blob snapshot data by providing simple query expressions.
errors = []
def on_error(error):
    errors.append(error)

# upload the csv file
blob_client = blob_service_client.get_blob_client(container_name, "csvfile")
with open("./sample-blobs/quick_query.csv", "rb") as stream:
    blob_client.upload_blob(stream, overwrite=True)

# select the second column of the csv file
query_expression = "SELECT _2 from BlobStorage"
input_format = DelimitedTextDialect(delimiter=',', quotechar='"', lineterminator='\n', escapechar="", has_header=False)
output_format = DelimitedJsonDialect(delimiter='\n')
reader = blob_client.query_blob(query_expression, on_error=on_error, blob_format=input_format, output_format=output_format)
content = reader.readall()
resize_blob(size: int, **kwargs: Any)Dict[str, Union[str, datetime]][source]

Resizes a page blob to the specified size.

If the specified value is less than the current size of the blob, then all pages above the specified value are cleared.

Parameters

size (int) – Size used to resize blob. Maximum size for a page blob is up to 1 TB. The page blob size must be aligned to a 512-byte boundary.

Keyword Arguments

Returns

Blob-updated property dict (Etag and last modified).

Return type

dict(str, Any)
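Since the page blob size must align to a 512-byte boundary, a caller typically rounds the requested size up before calling resize_blob. A minimal sketch of that rounding, assuming the standard 512-byte page size:

```python
PAGE_SIZE = 512  # page blobs are addressed in 512-byte pages

def align_to_page(size):
    # Round the requested size up to the next 512-byte boundary.
    return (size + PAGE_SIZE - 1) // PAGE_SIZE * PAGE_SIZE
```

A call such as blob_client.resize_blob(align_to_page(requested_size)) then always passes an aligned value.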

seal_append_blob(**kwargs)Dict[str, Union[str, datetime, int]][source]

The Seal operation seals the Append Blob to make it read-only.

New in version 12.4.0.

Keyword Arguments

Returns

Blob-updated property dict (Etag, last modified, append offset, committed block count).

Return type

dict(str, Any)

set_blob_metadata(metadata: Optional[Dict[str, str]] = None, **kwargs: Any)Dict[str, Union[str, datetime]][source]

Sets user-defined metadata for the blob as one or more name-value pairs.

Parameters

metadata (dict(str, str)) – Dict containing name and value pairs. Each call to this operation replaces all existing metadata attached to the blob. To remove all metadata from the blob, call this operation with no metadata headers.

Keyword Arguments

Returns

Blob-updated property dict (Etag and last modified)

set_blob_tags(tags: Optional[Dict[str, str]] = None, **kwargs: Any)Dict[str, Any][source]

The Set Tags operation enables users to set tags on a blob or specific blob version, but not snapshot.

Each call to this operation replaces all existing tags attached to the blob. To remove all tags from the blob, call this operation with no tags set.

New in version 12.4.0: This operation was introduced in API version ‘2019-12-12’.

Parameters

tags (dict(str, str)) – Name-value pairs associated with the blob as tag. Tags are case-sensitive. The tag set may contain at most 10 tags. Tag keys must be between 1 and 128 characters, and tag values must be between 0 and 256 characters. Valid tag key and value characters include: lowercase and uppercase letters, digits (0-9), space (` `), plus (+), minus (-), period (.), solidus (/), colon (:), equals (=), underscore (_)

Keyword Arguments

Returns

Blob-updated property dict (Etag and last modified)

Return type

Dict[str, Any]
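The tag constraints listed above (at most 10 tags, key length 1-128, value length 0-256, restricted character set) can be captured in a small client-side validator. This is an illustrative sketch, not part of the SDK:

```python
import re

# Characters documented as valid in tag keys and values:
# letters, digits, space, +, -, ., /, :, =, _
_TAG_CHARS = re.compile(r"^[A-Za-z0-9 +\-./:=_]*$")

def validate_blob_tags(tags):
    if len(tags) > 10:
        raise ValueError("a blob may carry at most 10 tags")
    for key, value in tags.items():
        if not 1 <= len(key) <= 128:
            raise ValueError(f"tag key {key!r} must be 1-128 characters")
        if len(value) > 256:
            raise ValueError(f"tag value for {key!r} must be at most 256 characters")
        if not _TAG_CHARS.match(key) or not _TAG_CHARS.match(value):
            raise ValueError(f"invalid character in tag {key!r}")
    return True

validate_blob_tags({"project": "alpha", "env/stage": "prod-1"})
```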

set_http_headers(content_settings: Optional[ContentSettings] = None, **kwargs: Any)None[source]

Sets system properties on the blob.

If one property is set for the content_settings, all properties will be overridden.

Parameters

content_settings (ContentSettings) – ContentSettings object used to set blob properties. Used to set content type, encoding, language, disposition, md5, and cache control.

Keyword Arguments

Returns

Blob-updated property dict (Etag and last modified)

Return type

Dict[str, Any]

set_immutability_policy()[source]

The Set Immutability Policy operation sets the immutability policy on the blob.

New in version 12.10.0: This operation was introduced in API version ‘2020-10-02’.

Parameters

immutability_policy (ImmutabilityPolicy) –

Specifies the immutability policy of a blob, blob snapshot or blob version.

New in version 12.10.0: This was introduced in API version ‘2020-10-02’.

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Returns

Key value pairs of blob tags.

Return type

Dict[str, str]

set_legal_hold(legal_hold: bool, **kwargs: Any)Dict[str, Union[str, datetime, bool]][source]

The Set Legal Hold operation sets a legal hold on the blob.

New in version 12.10.0: This operation was introduced in API version ‘2020-10-02’.

Parameters

legal_hold (bool) – Specifies whether a legal hold should be set on the blob.

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Returns

Key value pairs of blob tags.

Return type

Dict[str, Union[str, datetime, bool]]

set_premium_page_blob_tier(premium_page_blob_tier: Union[str, PremiumPageBlobTier], **kwargs: Any)None[source]

Sets the page blob tiers on the blob. This API is only supported for page blobs on premium accounts.

Parameters

premium_page_blob_tier (PremiumPageBlobTier) – A page blob tier value to set the blob to. The tier correlates to the size of the blob and number of allowed IOPS. This is only applicable to page blobs on premium storage accounts.

Keyword Arguments

Return type

None

set_sequence_number(sequence_number_action: Union[str, SequenceNumberAction], sequence_number: Optional[str] = None, **kwargs: Any)Dict[str, Union[str, datetime]][source]

Sets the blob sequence number.

Parameters

Keyword Arguments

Returns

Blob-updated property dict (Etag and last modified).

Return type

dict(str, Any)

set_standard_blob_tier(standard_blob_tier: Union[str, StandardBlobTier], **kwargs: Any)None[source]

This operation sets the tier on a block blob.

A block blob’s tier determines Hot/Cool/Archive storage type. This operation does not update the blob’s ETag.

Parameters

standard_blob_tier (str or StandardBlobTier) – Indicates the tier to be set on the blob. Options include ‘Hot’, ‘Cool’, ‘Archive’. The hot tier is optimized for storing data that is accessed frequently. The cool storage tier is optimized for storing data that is infrequently accessed and stored for at least a month. The archive tier is optimized for storing data that is rarely accessed and stored for at least six months with flexible latency requirements.

Keyword Arguments

Return type

None

stage_block(block_id: str, data: Union[Iterable[AnyStr], IO[AnyStr]], length: Optional[int] = None, **kwargs)Dict[str, Any][source]

Creates a new block to be committed as part of a blob.

Parameters

Keyword Arguments

Returns

Blob property dict.

Return type

dict[str, Any]

stage_block_from_url(block_id: Union[str, int], source_url: str, source_offset: Optional[int] = None, source_length: Optional[int] = None, source_content_md5: Optional[Union[bytes, bytearray]] = None, **kwargs)Dict[str, Any][source]

Creates a new block to be committed as part of a blob where the contents are read from a URL.

Parameters

Keyword Arguments

Returns

Blob property dict.

Return type

dict[str, Any]

start_copy_from_url(source_url: str, metadata: Optional[Dict[str, str]] = None, incremental_copy: bool = False, **kwargs: Any)Dict[str, Union[str, datetime]][source]

Copies a blob asynchronously.

This operation returns a copy operation object that can be used to wait on the completion of the operation, as well as check status or abort the copy operation. The Blob service copies blobs on a best-effort basis.

The source blob for a copy operation may be a block blob, an append blob, or a page blob. If the destination blob already exists, it must be of the same blob type as the source blob. Any existing destination blob will be overwritten. The destination blob cannot be modified while a copy operation is in progress.

When copying from a page blob, the Blob service creates a destination page blob of the source blob’s length, initially containing all zeroes. Then the source page ranges are enumerated, and non-empty ranges are copied.

For a block blob or an append blob, the Blob service creates a committed blob of zero length before returning from this operation. When copying from a block blob, all committed blocks and their block IDs are copied. Uncommitted blocks are not copied. At the end of the copy operation, the destination blob will have the same committed block count as the source.

When copying from an append blob, all committed blocks are copied. At the end of the copy operation, the destination blob will have the same committed block count as the source.

For all blob types, you can call status() on the returned polling object to check the status of the copy operation, or wait() to block until the operation is complete. The final blob will be committed when the copy completes.

Parameters

Keyword Arguments

Returns

A dictionary of copy properties (etag, last_modified, copy_id, copy_status).

Return type

dict[str, Union[str, datetime]]

Example:

Copy a blob from a URL.
# Get the blob client with the source blob
source_blob = "https://www.gutenberg.org/files/59466/59466-0.txt"
copied_blob = blob_service_client.get_blob_client("copyblobcontainer", '59466-0.txt')

# start copy and check copy status
copy = copied_blob.start_copy_from_url(source_blob)
props = copied_blob.get_blob_properties()
print(props.copy.status)
undelete_blob(**kwargs: Any)None[source]

Restores soft-deleted blobs or snapshots.

Operation will only be successful if used within the specified number of days set in the delete retention policy.

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Return type

None

Example:

Undeleting a blob.
# Undelete the blob before the retention policy expires
blob_client.undelete_blob()
upload_blob(data: Union[Iterable[AnyStr], IO[AnyStr]], blob_type: Union[str, BlobType] = <BlobType.BlockBlob: 'BlockBlob'>, length: Optional[int] = None, metadata: Optional[Dict[str, str]] = None, **kwargs)Any[source]

Creates a new blob from a data source with automatic chunking.

Parameters

Keyword Arguments

Returns

Blob-updated property dict (Etag and last modified)

Return type

dict[str, Any]

Example:

Upload a blob to the container.
# Upload content to block blob
with open(SOURCE_FILE, "rb") as data:
    blob_client.upload_blob(data, blob_type="BlockBlob")
upload_blob_from_url(source_url: str, **kwargs: Any)Dict[str, Any][source]

Creates a new Block Blob where the content of the blob is read from a given URL. The content of an existing blob is overwritten with the new blob.

Parameters

source_url (str) –

A URL of up to 2 KB in length that specifies a file or blob. The value should be URL-encoded as it would appear in a request URI. If the source is in another account, the source must either be public or must be authenticated via a shared access signature. If the source is public, no authentication is required. Examples: https://myaccount.blob.core.windows.net/mycontainer/myblob

https://myaccount.blob.core.windows.net/mycontainer/myblob?snapshot=<DateTime>

https://otheraccount.blob.core.windows.net/mycontainer/myblob?sastoken

Keyword Arguments

upload_page(page: bytes, offset: int, length: int, **kwargs) → Dict[str, Union[str, datetime]][source]

The Upload Pages operation writes a range of pages to a page blob.

Parameters

Keyword Arguments

Returns

Blob-updated property dict (Etag and last modified).

Return type

dict(str, Any)

upload_pages_from_url(source_url: str, offset: int, length: int, source_offset: int, **kwargs) → Dict[str, Any][source]

The Upload Pages operation writes a range of pages to a page blob where the contents are read from a URL.

Parameters

Keyword Arguments

property api_version

The version of the Storage API used for requests.

Type

str

property location_mode

The location mode that the client is currently using.

By default this will be “primary”. Options include “primary” and “secondary”.

Type

str

property primary_endpoint

The full primary endpoint URL.

Type

str

property primary_hostname

The hostname of the primary endpoint.

Type

str

property secondary_endpoint

The full secondary endpoint URL if configured.

If not available a ValueError will be raised. To explicitly specify a secondary hostname, use the optional secondary_hostname keyword argument on instantiation.

Type

str

Raises

ValueError

property secondary_hostname

The hostname of the secondary endpoint.

If not available this will be None. To explicitly specify a secondary hostname, use the optional secondary_hostname keyword argument on instantiation.

Type

str or None

property url

The full endpoint URL to this entity, including SAS token if used.

This could be either the primary endpoint, or the secondary endpoint depending on the current location_mode().

class azure.storage.blob.BlobImmutabilityPolicyMode(value)[source]

Specifies the immutability policy mode to set on the blob. "Mutable" can only be returned by the service; do not set the policy mode to "Mutable".

Locked = 'Locked'

Mutable = 'Mutable'

Unlocked = 'Unlocked'

class azure.storage.blob.BlobLeaseClient(client: Union[BlobClient, ContainerClient], lease_id: Optional[str] = None)[source]

Creates a new BlobLeaseClient.

This client provides lease operations on a BlobClient or ContainerClient.

Variables

Parameters

acquire(lease_duration: int = -1, **kwargs: Any) → None[source]

Requests a new lease.

If the container does not have an active lease, the Blob service creates a lease on the container and returns a new lease ID.

Parameters

lease_duration (int) – Specifies the duration of the lease, in seconds, or negative one (-1) for a lease that never expires. A non-infinite lease can be between 15 and 60 seconds. A lease duration cannot be changed using renew or change. Default is -1 (infinite lease).

Keyword Arguments

Return type

None
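The lease_duration rule above (-1 for an infinite lease, otherwise between 15 and 60 seconds) can be sketched as a small local check; the acquire() call itself is shown commented out and assumes a BlobLeaseClient named lease_client:

```python
def is_valid_lease_duration(seconds: int) -> bool:
    # -1 requests a lease that never expires; a fixed-duration
    # lease must be between 15 and 60 seconds.
    return seconds == -1 or 15 <= seconds <= 60

# With a BlobLeaseClient instance (requires azure-storage-blob):
# lease_client.acquire(lease_duration=15)
```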

break_lease(lease_break_period: Optional[int] = None, **kwargs: Any) → int[source]

Break the lease, if the container or blob has an active lease.

Once a lease is broken, it cannot be renewed. Any authorized request can break the lease; the request is not required to specify a matching lease ID. When a lease is broken, the lease break period is allowed to elapse, during which time no lease operation except break and release can be performed on the container or blob. When a lease is successfully broken, the response indicates the interval in seconds until a new lease can be acquired.

Parameters

lease_break_period (int) – This is the proposed duration of seconds that the lease should continue before it is broken, between 0 and 60 seconds. This break period is only used if it is shorter than the time remaining on the lease. If longer, the time remaining on the lease is used. A new lease will not be available before the break period has expired, but the lease may be held for longer than the break period. If this header does not appear with a break operation, a fixed-duration lease breaks after the remaining lease period elapses, and an infinite lease breaks immediately.

Keyword Arguments

Returns

Approximate time remaining in the lease period, in seconds.

Return type

int
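The break-period rule described above (the proposed period is only honoured when it is shorter than the time remaining on the lease) can be mirrored with a small local helper; this is an illustration of the documented behaviour for fixed-duration leases, not SDK code:

```python
from typing import Optional

def effective_break_period(remaining: int, proposed: Optional[int]) -> int:
    # The proposed break period is only used if it is shorter than the
    # time remaining on the lease; otherwise the remaining time is used.
    # With no proposed period, a fixed-duration lease breaks after the
    # remaining lease period elapses.
    if proposed is None:
        return remaining
    return min(proposed, remaining)
```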

change(proposed_lease_id: str, **kwargs: Any) → None[source]

Change the lease ID of an active lease.

Parameters

proposed_lease_id (str) – Proposed lease ID, in a GUID string format. The Blob service returns 400 (Invalid request) if the proposed lease ID is not in the correct format.

Keyword Arguments

Returns

None

release(**kwargs: Any) → None[source]

Release the lease.

The lease may be released if the client lease id specified matches that associated with the container or blob. Releasing the lease allows another client to immediately acquire the lease for the container or blob as soon as the release is complete.

Keyword Arguments

Returns

None

renew(**kwargs: Any) → None[source]

Renews the lease.

The lease can be renewed if the lease ID specified in the lease client matches that associated with the container or blob. Note that the lease may be renewed even if it has expired as long as the container or blob has not been leased again since the expiration of that lease. When you renew a lease, the lease duration clock resets.

Keyword Arguments

Returns

None

class azure.storage.blob.BlobPrefix(*args, **kwargs)[source]

An Iterable of Blob properties.

Returned from walk_blobs when a delimiter is used. Can be thought of as a virtual blob directory.

Variables

Parameters

Return an iterator of items.

args and kwargs will be passed to the PageIterator constructor directly, except page_iterator_class

by_page(continuation_token: Optional[str] = None) → Iterator[Iterator[ReturnType]]

Get an iterator of pages of objects, instead of an iterator of objects.

Parameters

continuation_token (str) – An opaque continuation token. This value can be retrieved from the continuation_token field of a previous generator object. If specified, this generator will begin returning results from this point.

Returns

An iterator of pages (themselves iterator of objects)

get(key, default=None)

has_key(k)

items()

keys()

next()

Return the next item from the iterator. When exhausted, raise StopIteration

update(*args, **kwargs)

values()

class azure.storage.blob.BlobProperties(**kwargs)[source]

Blob Properties.

Variables

get(key, default=None)

has_key(k)

items()

keys()

update(*args, **kwargs)

values()

class azure.storage.blob.BlobQueryError(error=None, is_fatal=False, description=None, position=None)[source]

An error that occurred during a quick query operation.

Variables

class azure.storage.blob.BlobQueryReader(name=None, container=None, errors=None, record_delimiter='\n', encoding=None, headers=None, response=None, error_cls=None)[source]

A streaming object to read query results.

Variables

readall() → Union[bytes, str][source]

Return all query results.

This operation is blocking until all data is downloaded. If an encoding has been configured, it will be used to decode individual records as they are received.

Return type

Union[bytes, str]

readinto(stream: IO) → None[source]

Download the query result to a stream.

Parameters

stream – The stream to download to. This can be an open file-handle, or any writable stream.

Returns

None

records() → Iterable[Union[bytes, str]][source]

Returns a record generator for the query result.

Records will be returned line by line. If an encoding has been configured, it will be used to decode individual records as they are received.

Return type

Iterable[Union[bytes, str]]
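The record handling described above (split the payload on the record delimiter, then decode each record only when an encoding is configured) can be sketched with a stdlib-only generator; this mirrors the documented behaviour for illustration and is not the SDK implementation:

```python
from typing import Iterator, Optional, Union

def decode_records(raw: bytes, record_delimiter: str = "\n",
                   encoding: Optional[str] = None) -> Iterator[Union[bytes, str]]:
    # Split the raw payload on the configured record delimiter and, if an
    # encoding was configured, decode each record as it is yielded.
    for record in raw.split(record_delimiter.encode("utf-8")):
        yield record.decode(encoding) if encoding else record

rows = list(decode_records(b"a,1\nb,2", encoding="utf-8"))
```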

class azure.storage.blob.BlobSasPermissions(read=False, add=False, create=False, write=False, delete=False, delete_previous_version=False, tag=True, **kwargs)[source]

BlobSasPermissions class to be used with the generate_blob_sas() function.

Parameters

Keyword Arguments

set_immutability_policy (bool) – To enable operations related to set/delete immutability policy. To get immutability policy, you just need read permission.

classmethod from_string(permission)[source]

Create a BlobSasPermissions from a string.

To specify read, add, create, write, or delete permissions you need only to include the first letter of the word in the string. E.g. For read and write permissions, you would provide a string “rw”.

Parameters

permission (str) – The string which dictates the read, add, create, write, or delete permissions.

Returns

A BlobSasPermissions object

Return type

BlobSasPermissions
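The first-letter rule described above can be mirrored by a small local parser; this is a hypothetical illustration of how a string such as "rw" maps onto permission flags, not the SDK's own parsing code:

```python
# Hypothetical local mirror of the first-letter rule used by
# BlobSasPermissions.from_string (for illustration only).
_FLAGS = {"r": "read", "a": "add", "c": "create", "w": "write", "d": "delete"}

def parse_permission_string(permission: str) -> dict:
    # Every recognised letter in the string enables the matching flag.
    parsed = {name: False for name in _FLAGS.values()}
    for letter in permission:
        if letter in _FLAGS:
            parsed[_FLAGS[letter]] = True
    return parsed

perms = parse_permission_string("rw")
```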

class azure.storage.blob.BlobServiceClient(account_url: str, credential: Optional[Any] = None, **kwargs: Any)[source]

A client to interact with the Blob Service at the account level.

This client provides operations to retrieve and configure the account properties as well as list, create and delete containers within the account. For operations relating to a specific container or blob, clients for those entities can also be retrieved using the get_client functions.

For more optional configuration, please click here.

Parameters

Keyword Arguments

Example:

Creating the BlobServiceClient with account url and credential.
from azure.storage.blob import BlobServiceClient
blob_service_client = BlobServiceClient(account_url=self.url, credential=self.shared_access_key)
Creating the BlobServiceClient with Azure Identity credentials.
# Get a token credential for authentication
from azure.identity import ClientSecretCredential
token_credential = ClientSecretCredential(
    self.active_directory_tenant_id,
    self.active_directory_application_id,
    self.active_directory_application_secret
)

# Instantiate a BlobServiceClient using a token credential
from azure.storage.blob import BlobServiceClient
blob_service_client = BlobServiceClient(account_url=self.oauth_url, credential=token_credential)
close()

This method closes the sockets opened by the client. It need not be called when the client is used with a context manager.

create_container(name: str, metadata: Optional[Dict[str, str]] = None, public_access: Optional[Union[PublicAccess, str]] = None, **kwargs) → ContainerClient[source]

Creates a new container under the specified account.

If the container with the same name already exists, a ResourceExistsError will be raised. This method returns a client with which to interact with the newly created container.

Parameters

Keyword Arguments

Return type

ContainerClient

Example:

Creating a container in the blob service.
try:
    new_container = blob_service_client.create_container("containerfromblobservice")
    properties = new_container.get_container_properties()
except ResourceExistsError:
    print("Container already exists.")
delete_container(container: Union[ContainerProperties, str], lease: Optional[Union[BlobLeaseClient, str]] = None, **kwargs) → None[source]

Marks the specified container for deletion.

The container and any blobs contained within it are later deleted during garbage collection. If the container is not found, a ResourceNotFoundError will be raised.

Parameters

Keyword Arguments

Return type

None

Example:

Deleting a container in the blob service.
# Delete container if it exists
try:
    blob_service_client.delete_container("containerfromblobservice")
except ResourceNotFoundError:
    print("Container already deleted.")
find_blobs_by_tags(filter_expression: str, **kwargs: Any) → ItemPaged[FilteredBlob][source]

The Filter Blobs operation enables callers to list blobs across all containers whose tags match a given search expression. Filter blobs searches across all containers within a storage account but can be scoped within the expression to a single container.

Parameters

filter_expression (str) – The expression to find blobs whose tags match the specified condition, e.g. "yourtagname"='firsttag' and "yourtagname2"='secondtag'. To scope the search to a container, e.g. @container='containerName' and "Name"='C'.

Keyword Arguments

Returns

An iterable (auto-paging) response of BlobProperties.

Return type

ItemPaged[FilteredBlob]
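Building the filter expression programmatically can help avoid quoting mistakes. The helper below is a minimal sketch of the documented expression shape (tag names in double quotes, values in single quotes, optional @container scoping); the commented-out call assumes a live BlobServiceClient:

```python
from typing import Optional

def tag_filter(tag: str, value: str, container: Optional[str] = None) -> str:
    # Build a filter expression of the form "tag"='value', optionally
    # scoped to a single container with @container='name'.
    expression = f"\"{tag}\"='{value}'"
    if container:
        return f"@container='{container}' and {expression}"
    return expression

expr = tag_filter("yourtagname", "firsttag")
scoped = tag_filter("Name", "C", container="containerName")

# blob_service_client.find_blobs_by_tags(expr)  # requires azure-storage-blob
```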

classmethod from_connection_string(conn_str: str, credential: Optional[Any] = None, **kwargs: Any) → azure.storage.blob._blob_service_client.BlobServiceClient[source]

Create BlobServiceClient from a Connection String.

Parameters

Returns

A Blob service client.

Return type

BlobServiceClient

Example:

Creating the BlobServiceClient from a connection string.
from azure.storage.blob import BlobServiceClient
blob_service_client = BlobServiceClient.from_connection_string(self.connection_string)
get_account_information(**kwargs: Any) → Dict[str, str][source]

Gets information related to the storage account.

The information can also be retrieved if the user has a SAS to a container or blob. The keys in the returned dictionary include ‘sku_name’ and ‘account_kind’.

Returns

A dict of account information (SKU and account type).

Return type

dict(str, str)

Example:

Getting account information for the blob service.
account_info = blob_service_client.get_account_information()
print('Using Storage SKU: {}'.format(account_info['sku_name']))
get_blob_client(container: Union[ContainerProperties, str], blob: Union[BlobProperties, str], snapshot: Optional[Union[Dict[str, Any], str]] = None) → BlobClient[source]

Get a client to interact with the specified blob.

The blob need not already exist.

ParametersReturns

A BlobClient.

Return type

BlobClient

Example:

Getting the blob client to interact with a specific blob.
blob_client = blob_service_client.get_blob_client(container="containertest", blob="my_blob")
try:
    stream = blob_client.download_blob()
except ResourceNotFoundError:
    print("No blob found.")
get_container_client(container: Union[ContainerProperties, str]) → ContainerClient[source]

Get a client to interact with the specified container.

The container need not already exist.

Parameters

container (str or ContainerProperties) – The container. This can either be the name of the container, or an instance of ContainerProperties.

Returns

A ContainerClient.

Return type

ContainerClient

Example:

Getting the container client to interact with a specific container.
# Get a client to interact with a specific container - though it may not yet exist
container_client = blob_service_client.get_container_client("containertest")
try:
    for blob in container_client.list_blobs():
        print("Found blob: ", blob.name)
except ResourceNotFoundError:
    print("Container not found.")
get_service_properties(**kwargs: Any) → Dict[str, Any][source]

Gets the properties of a storage account’s Blob service, including Azure Storage Analytics.

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Returns

An object containing blob service properties such as analytics logging, hour/minute metrics, cors rules, etc.

Return type

Dict[str, Any]

Example:

Getting service properties for the blob service.
properties = blob_service_client.get_service_properties() 
get_service_stats(**kwargs: Any) → Dict[str, Any][source]

Retrieves statistics related to replication for the Blob service.

It is only available when read-access geo-redundant replication is enabled for the storage account.

With geo-redundant replication, Azure Storage maintains your data durably in two locations. In both locations, Azure Storage constantly maintains multiple healthy replicas of your data. The location where you read, create, update, or delete data is the primary storage account location. The primary location exists in the region you choose at the time you create an account, for example, North Central US. The location to which your data is replicated is the secondary location. The secondary location is automatically determined based on the location of the primary; it is in a second data center that resides in the same region as the primary location. Read-only access is available from the secondary location if read-access geo-redundant replication is enabled for your storage account.

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Returns

The blob service stats.

Return type

Dict[str, Any]

Example:

Getting service stats for the blob service.
stats = blob_service_client.get_service_stats() 
get_user_delegation_key(key_start_time: datetime, key_expiry_time: datetime, **kwargs: Any) → UserDelegationKey[source]

Obtain a user delegation key for the purpose of signing SAS tokens. A token credential must be present on the service object for this request to succeed.

Parameters

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Returns

The user delegation key.

Return type

UserDelegationKey
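A common pattern is to request a delegation key over a short validity window. The sketch below only builds the start and expiry datetimes; the get_user_delegation_key call itself is commented out because it requires a BlobServiceClient carrying a token credential:

```python
from datetime import datetime, timedelta, timezone

# Hedged sketch: choose a short validity window for the delegation key.
key_start_time = datetime.now(timezone.utc)
key_expiry_time = key_start_time + timedelta(hours=1)

# With a token-credential BlobServiceClient (requires azure-storage-blob):
# delegation_key = blob_service_client.get_user_delegation_key(
#     key_start_time, key_expiry_time)
```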

list_containers(name_starts_with: Optional[str] = None, include_metadata: Optional[bool] = False, **kwargs) → ItemPaged[ContainerProperties][source]

Returns a generator to list the containers under the specified account.

The generator will lazily follow the continuation tokens returned by the service and stop when all containers have been returned.

Parameters

Keyword Arguments

Returns

An iterable (auto-paging) of ContainerProperties.

Return type

ItemPaged[ContainerProperties]

Example:

Listing the containers in the blob service.
# List all containers
all_containers = blob_service_client.list_containers(include_metadata=True)
for container in all_containers:
    print(container['name'], container['metadata'])

# Filter results with name prefix
test_containers = blob_service_client.list_containers(name_starts_with='test-')
for container in test_containers:
    print(container['name'], container['metadata'])
set_service_properties(analytics_logging: Optional[BlobAnalyticsLogging] = None, hour_metrics: Optional[Metrics] = None, minute_metrics: Optional[Metrics] = None, cors: Optional[List[CorsRule]] = None, target_version: Optional[str] = None, delete_retention_policy: Optional[RetentionPolicy] = None, static_website: Optional[StaticWebsite] = None, **kwargs) → None[source]

Sets the properties of a storage account’s Blob service, including Azure Storage Analytics.

If an element (e.g. analytics_logging) is left as None, the existing settings on the service for that functionality are preserved.

Parameters

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Return type

None

Example:

Setting service properties for the blob service.
# Create service properties
from azure.storage.blob import BlobAnalyticsLogging, Metrics, CorsRule, RetentionPolicy

# Create logging settings
logging = BlobAnalyticsLogging(read=True, write=True, delete=True,
                               retention_policy=RetentionPolicy(enabled=True, days=5))

# Create metrics for requests statistics
hour_metrics = Metrics(enabled=True, include_apis=True,
                       retention_policy=RetentionPolicy(enabled=True, days=5))
minute_metrics = Metrics(enabled=True, include_apis=True,
                         retention_policy=RetentionPolicy(enabled=True, days=5))

# Create CORS rules
cors_rule = CorsRule(['www.xyz.com'], ['GET'])
cors = [cors_rule]

# Set the service properties
blob_service_client.set_service_properties(logging, hour_metrics, minute_metrics, cors)
undelete_container(deleted_container_name: str, deleted_container_version: str, **kwargs: Any) → ContainerClient[source]

Restores a soft-deleted container.

The operation will only succeed if used within the number of days specified in the delete retention policy.

New in version 12.4.0: This operation was introduced in API version ‘2019-12-12’.

Parameters

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Return type

ContainerClient
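The retention-window constraint above can be expressed as a small local check; this helper is an illustration of the documented rule (the deletion time and retention days are placeholder values), not part of the SDK:

```python
from datetime import datetime, timedelta, timezone

def within_retention(deleted_at: datetime, retention_days: int,
                     now: datetime) -> bool:
    # undelete_container only succeeds while the soft-deleted container
    # is still inside the delete retention window.
    return now < deleted_at + timedelta(days=retention_days)

# Placeholder deletion timestamp for illustration.
deleted_at = datetime(2024, 5, 1, tzinfo=timezone.utc)
```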

property api_version

The version of the Storage API used for requests.

Type

str

property location_mode

The location mode that the client is currently using.

By default this will be “primary”. Options include “primary” and “secondary”.

Type

str

property primary_endpoint

The full primary endpoint URL.

Type

str

property primary_hostname

The hostname of the primary endpoint.

Type

str

property secondary_endpoint

The full secondary endpoint URL if configured.

If not available a ValueError will be raised. To explicitly specify a secondary hostname, use the optional secondary_hostname keyword argument on instantiation.

Type

str

Raises

ValueError

property secondary_hostname

The hostname of the secondary endpoint.

If not available this will be None. To explicitly specify a secondary hostname, use the optional secondary_hostname keyword argument on instantiation.

Type

str or None

property url

The full endpoint URL to this entity, including SAS token if used.

This could be either the primary endpoint, or the secondary endpoint depending on the current location_mode().

class azure.storage.blob.BlobType(value)[source]

An enumeration.

AppendBlob = 'AppendBlob'

BlockBlob = 'BlockBlob'

PageBlob = 'PageBlob'

class azure.storage.blob.BlockState(value)[source]

Block blob block types.

Committed = 'Committed'

Committed blocks.

Latest = 'Latest'

Latest blocks.

Uncommitted = 'Uncommitted'

Uncommitted blocks.

class azure.storage.blob.ContainerClient(account_url: str, container_name: str, credential: Optional[Any] = None, **kwargs: Any)[source]

A client to interact with a specific container, although that container may not yet exist.

For operations relating to a specific blob within this container, a blob client can be retrieved using the get_blob_client() function.

For more optional configuration, please click here.

Parameters

Keyword Arguments

Example:

Get a ContainerClient from an existing BlobServiceClient.
# Instantiate a BlobServiceClient using a connection string
from azure.storage.blob import BlobServiceClient
blob_service_client = BlobServiceClient.from_connection_string(self.connection_string)

# Instantiate a ContainerClient
container_client = blob_service_client.get_container_client("mynewcontainer")
Creating the container client directly.
from azure.storage.blob import ContainerClient
sas_url = "https://account.blob.core.windows.net/mycontainer?sv=2015-04-05&st=2015-04-29T22%3A18%3A26Z&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D"
container = ContainerClient.from_container_url(sas_url)
acquire_lease(lease_duration: int = -1, lease_id: Optional[str] = None, **kwargs) → BlobLeaseClient[source]

Requests a new lease. If the container does not have an active lease, the Blob service creates a lease on the container and returns a new lease ID.

Parameters

Keyword Arguments

Returns

A BlobLeaseClient object, that can be run in a context manager.

Return type

BlobLeaseClient

Example:

Acquiring a lease on the container.
# Acquire a lease on the container
lease = container_client.acquire_lease()

# Delete container by passing in the lease
container_client.delete_container(lease=lease)
close()

This method closes the sockets opened by the client. It need not be called when the client is used with a context manager.

create_container(metadata: Optional[Dict[str, str]] = None, public_access: Optional[Union[PublicAccess, str]] = None, **kwargs: Any) → None[source]

Creates a new container under the specified account. If the container with the same name already exists, the operation fails.

Parameters

Keyword Arguments

Return type

None

Example:

Creating a container to store blobs.
container_client.create_container() 
delete_blob(blob: Union[str, BlobProperties], delete_snapshots: Optional[str] = None, **kwargs) → None[source]

Marks the specified blob or snapshot for deletion.

The blob is later deleted during garbage collection. Note that in order to delete a blob, you must delete all of its snapshots. You can delete both at the same time with the delete_blob operation.

If a delete retention policy is enabled for the service, then this operation soft deletes the blob or snapshot and retains it for the specified number of days. After that period, the blob's data is removed from the service during garbage collection. A soft-deleted blob or snapshot is accessible through list_blobs() with the include=["deleted"] option, and can be restored using undelete().

Parameters

Keyword Arguments

Return type

None
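The delete_snapshots options mentioned above ('include' to delete the blob together with its snapshots, 'only' to delete just the snapshots) can be captured in a small validation helper; this is an illustration of the documented values, and the commented-out call assumes an existing ContainerClient:

```python
# Accepted values for the delete_snapshots parameter: None is valid
# only for a blob that has no snapshots.
VALID_DELETE_SNAPSHOTS = {None, "include", "only"}

def is_valid_delete_snapshots(option) -> bool:
    # 'include' deletes the blob together with its snapshots;
    # 'only' deletes just the snapshots.
    return option in VALID_DELETE_SNAPSHOTS

# container_client.delete_blob("my_blob", delete_snapshots="include")
```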

delete_blobs(*blobs, **kwargs) → Iterator[HttpResponse][source]

Marks the specified blobs or snapshots for deletion.

The blobs are later deleted during garbage collection. Note that in order to delete blobs, you must delete all of their snapshots. You can delete both at the same time with the delete_blobs operation.

If a delete retention policy is enabled for the service, then this operation soft deletes the blobs or snapshots and retains them for the specified number of days. After that period, the blobs' data is removed from the service during garbage collection. Soft-deleted blobs or snapshots are accessible through list_blobs() with the include=["deleted"] option, and can be restored using undelete().

Parameters

blobs (list[str], list[dict], or list[BlobProperties]) –

The blobs to delete. This can be a single blob, or multiple values can be supplied, where each value is either the name of the blob (str) or BlobProperties.

Note

When a blob entry is a dict, the following keys and value rules apply.

blob name:

key: ‘name’, value type: str

snapshot you want to delete:

key: ‘snapshot’, value type: str

whether to delete snapshots when deleting the blob:

key: ‘delete_snapshots’, value: ‘include’ or ‘only’

whether the blob has been modified:

key: ‘if_modified_since’, ‘if_unmodified_since’, value type: datetime

etag:

key: ‘etag’, value type: str

match the etag or not:

key: ‘match_condition’, value type: MatchConditions

tags match condition:

key: ‘if_tags_match_condition’, value type: str

lease:

key: ‘lease_id’, value type: Union[str, LeaseClient]

timeout for subrequest:

key: ‘timeout’, value type: int

Keyword Arguments

Returns

An iterator of responses, one for each blob in order

Return type

Iterator[HttpResponse]

Example:

Deleting multiple blobs.
# Delete multiple blobs in the container by name
container_client.delete_blobs("my_blob1", "my_blob2")

# Delete multiple blobs by properties iterator
my_blobs = container_client.list_blobs(name_starts_with="my_blob")
container_client.delete_blobs(*my_blobs)
delete_container(**kwargs: Any) → None[source]

Marks the specified container for deletion. The container and any blobs contained within it are later deleted during garbage collection.

Keyword Arguments

Return type

None

Example:

Delete a container.
container_client.delete_container() 
download_blob(blob: Union[str, BlobProperties], offset: Optional[int] = None, length: Optional[int] = None, **kwargs: Any) → StorageStreamDownloader[source]

Downloads a blob to the StorageStreamDownloader. The readall() method must be used to read all the content or readinto() must be used to download the blob into a stream. Using chunks() returns an iterator which allows the user to iterate over the content in chunks.

Parameters

Keyword Arguments

Returns

A streaming object (StorageStreamDownloader)

Return type

StorageStreamDownloader

exists(**kwargs: Any) → bool[source]

Returns True if the container exists, and False otherwise.

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Returns

boolean

classmethod from_connection_string(conn_str: str, container_name: str, credential: Optional[Any] = None, **kwargs: Any) → azure.storage.blob._container_client.ContainerClient[source]

Create ContainerClient from a Connection String.

Parameters

Returns

A container client.

Return type

ContainerClient

Example:

Creating the ContainerClient from a connection string.
from azure.storage.blob import ContainerClient
container_client = ContainerClient.from_connection_string(
    self.connection_string, container_name="mycontainer")
classmethod from_container_url(container_url: str, credential: Optional[Any] = None, **kwargs: Any) → azure.storage.blob._container_client.ContainerClient[source]

Create ContainerClient from a container url.

Parameters

Returns

A container client.

Return type

ContainerClient

get_account_information(**kwargs: Any) → Dict[str, str][source]

Gets information related to the storage account.

The information can also be retrieved if the user has a SAS to a container or blob. The keys in the returned dictionary include ‘sku_name’ and ‘account_kind’.

Returns

A dict of account information (SKU and account type).

Return type

dict(str, str)

get_blob_client(blob: Union[str, azure.storage.blob._models.BlobProperties], snapshot: Optional[str] = None) → azure.storage.blob._blob_client.BlobClient[source]

Get a client to interact with the specified blob.

The blob need not already exist.

Parameters

Returns

A BlobClient.

Return type

BlobClient

Example:

Get the blob client.
# Get the BlobClient from the ContainerClient to interact with a specific blob
blob_client = container_client.get_blob_client("mynewblob")
get_container_access_policy(**kwargs: Any) → Dict[str, Any][source]

Gets the permissions for the specified container. The permissions indicate whether container data may be accessed publicly.

Keyword Arguments

Returns

Access policy information in a dict.

Return type

dict[str, Any]

Example:

Getting the access policy on the container.
policy = container_client.get_container_access_policy() 
get_container_properties(**kwargs: Any) → ContainerProperties[source]

Returns all user-defined metadata and system properties for the specified container. The data returned does not include the container’s list of blobs.

Keyword Arguments

Returns

Properties for the specified container within a container object.

Return type

ContainerProperties

Example:

Getting properties on the container.
properties = container_client.get_container_properties() 
list_blobs(name_starts_with: Optional[str] = None, include: Optional[Union[str, List[str]]] = None, **kwargs: Any) → ItemPaged[BlobProperties][source]

Returns a generator to list the blobs under the specified container. The generator will lazily follow the continuation tokens returned by the service.

Parameters

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Returns

An iterable (auto-paging) response of BlobProperties.

Return type

ItemPaged[BlobProperties]

Example:

List the blobs in the container.
blobs_list = container_client.list_blobs()
for blob in blobs_list:
    print(blob.name + '\n')
set_container_access_policy(signed_identifiers: Dict[str, AccessPolicy], public_access: Optional[Union[str, PublicAccess]] = None, **kwargs) → Dict[str, Union[str, datetime]][source]

Sets the permissions for the specified container or stored access policies that may be used with Shared Access Signatures. The permissions indicate whether blobs in a container may be accessed publicly.

Parameters

Keyword Arguments

Returns

Container-updated property dict (Etag and last modified).

Return type

dict[str, str or datetime]

Example:

Setting access policy on the container.
# Create access policy
from azure.storage.blob import AccessPolicy, ContainerSasPermissions
access_policy = AccessPolicy(permission=ContainerSasPermissions(read=True),
                             expiry=datetime.utcnow() + timedelta(hours=1),
                             start=datetime.utcnow() - timedelta(minutes=1))
identifiers = {'test': access_policy}

# Set the access policy on the container
container_client.set_container_access_policy(signed_identifiers=identifiers)
set_container_metadata(metadata: Optional[Dict[str, str]] = None, **kwargs) → Dict[str, Union[str, datetime]][source]

Sets one or more user-defined name-value pairs for the specified container. Each call to this operation replaces all existing metadata attached to the container. To remove all metadata from the container, call this operation with no metadata dict.

Parameters

metadata (dict[str, str]) – A dict containing name-value pairs to associate with the container as metadata. Example: {‘category’:’test’}

Keyword Arguments

Returns

Container-updated property dict (Etag and last modified).

Return type

dict[str, str or datetime]

Example:

Setting metadata on the container.
# Create key, value pairs for metadata
metadata = {'type': 'test'}

# Set metadata on the container
container_client.set_container_metadata(metadata=metadata)
set_premium_page_blob_tier_blobs(premium_page_blob_tier: Optional[Union[str, PremiumPageBlobTier]], *blobs: List[Union[str, BlobProperties, dict]], **kwargs) → Iterator[HttpResponse][source]

Sets the page blob tiers on all blobs. This API is only supported for page blobs on premium accounts.

ParametersKeyword ArgumentsReturns

An iterator of responses, one for each blob in order

Return type

iterator[HttpResponse]

set_standard_blob_tier_blobs(standard_blob_tier: Optional[Union[str, StandardBlobTier]], *blobs: List[Union[str, BlobProperties, dict]], **kwargs)Iterator[HttpResponse][source]

This operation sets the tier on block blobs.

A block blob’s tier determines Hot/Cool/Archive storage type. This operation does not update the blob’s ETag.

ParametersKeyword ArgumentsReturns

An iterator of responses, one for each blob in order

Return type

Iterator[HttpResponse]

upload_blob(name: Union[str, BlobProperties], data: Union[Iterable[AnyStr], IO[AnyStr]], blob_type: Union[str, BlobType] = <BlobType.BlockBlob: 'BlockBlob'>, length: Optional[int] = None, metadata: Optional[Dict[str, str]] = None, **kwargs)BlobClient[source]

Creates a new blob from a data source with automatic chunking.

ParametersKeyword ArgumentsReturns

A BlobClient to interact with the newly uploaded blob.

Return type

BlobClient

Example:

Upload blob to the container.
with open(SOURCE_FILE, "rb") as data:
    blob_client = container_client.upload_blob(name="myblob", data=data)
    properties = blob_client.get_blob_properties()
walk_blobs(name_starts_with: Optional[str] = None, include: Optional[Any] = None, delimiter: str = '/', **kwargs: Optional[Any])ItemPaged[BlobProperties][source]

Returns a generator to list the blobs under the specified container. The generator will lazily follow the continuation tokens returned by the service. This operation will list blobs in accordance with a hierarchy, as delimited by the specified delimiter character.

ParametersKeyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Returns

An iterable (auto-paging) response of BlobProperties.

Return type

ItemPaged[BlobProperties]

property api_version

The version of the Storage API used for requests.

Type

str

property location_mode

The location mode that the client is currently using.

By default this will be “primary”. Options include “primary” and “secondary”.

Type

str

property primary_endpoint

The full primary endpoint URL.

Type

str

property primary_hostname

The hostname of the primary endpoint.

Type

str

property secondary_endpoint

The full secondary endpoint URL if configured.

If not available a ValueError will be raised. To explicitly specify a secondary hostname, use the optional secondary_hostname keyword argument on instantiation.

Type

str

Raises

ValueError

property secondary_hostname

The hostname of the secondary endpoint.

If not available this will be None. To explicitly specify a secondary hostname, use the optional secondary_hostname keyword argument on instantiation.

Type

str or None

property url

The full endpoint URL to this entity, including SAS token if used.

This could be either the primary endpoint, or the secondary endpoint depending on the current location_mode().

class azure.storage.blob.ContainerEncryptionScope(default_encryption_scope, **kwargs)[source]

The default encryption scope configuration for a container.

This scope is used implicitly for all future writes within the container, but can be overridden per blob operation.

New in version 12.2.0.

Parametersclass azure.storage.blob.ContainerProperties(**kwargs)[source]

Blob container’s properties class.

Returned ContainerProperties instances expose these values through a dictionary interface, for example: container_props["last_modified"]. Additionally, the container name is available as container_props["name"].

Variablesget(key, default=None) has_key(k) items() keys() update(*args, **kwargs) values() class azure.storage.blob.ContainerSasPermissions(read=False, write=False, delete=False, list=False, delete_previous_version=False, tag=False, **kwargs)[source]

ContainerSasPermissions class to be used with the generate_container_sas() function and for the AccessPolicies used with set_container_access_policy().

ParametersKeyword Arguments

set_immutability_policy (bool) – To enable operations related to set/delete immutability policy. To get immutability policy, you just need read permission.

classmethod from_string(permission)[source]

Create a ContainerSasPermissions from a string.

To specify read, write, delete, or list permissions you need only to include the first letter of the word in the string. E.g. for read and write permissions, you would provide the string “rw”.

Parameters

permission (str) – The string which dictates the read, write, delete, and list permissions.

Returns

A ContainerSasPermissions object

Return type

ContainerSasPermissions
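The first-letter convention above can be sketched in plain Python. This is an illustration of the documented behavior only, not the SDK's actual parser, and the helper name parse_permissions is hypothetical:

```python
# Hypothetical sketch of the first-letter permission convention used by
# ContainerSasPermissions.from_string -- not the SDK's actual parser.
def parse_permissions(permission: str) -> dict:
    flags = {"read": "r", "write": "w", "delete": "d", "list": "l"}
    return {name: letter in permission for name, letter in flags.items()}

print(parse_permissions("rw"))
# {'read': True, 'write': True, 'delete': False, 'list': False}
```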

class azure.storage.blob.ContentSettings(content_type=None, content_encoding=None, content_language=None, content_disposition=None, cache_control=None, content_md5=None, **kwargs)[source]

The content settings of a blob.

Parametersget(key, default=None) has_key(k) items() keys() update(*args, **kwargs) values() class azure.storage.blob.CopyProperties(**kwargs)[source]

Blob Copy Properties.

These properties will be None if this blob has never been the destination in a Copy Blob operation, or if this blob has been modified after a concluded Copy Blob operation, for example, using Set Blob Properties, Upload Blob, or Commit Block List.

Variablesget(key, default=None) has_key(k) items() keys() update(*args, **kwargs) values() class azure.storage.blob.CorsRule(allowed_origins, allowed_methods, **kwargs)[source]

CORS is an HTTP feature that enables a web application running under one domain to access resources in another domain. Web browsers implement a security restriction known as same-origin policy that prevents a web page from calling APIs in a different domain; CORS provides a secure way to allow one domain (the origin domain) to call APIs in another domain.

ParametersKeyword Argumentsas_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage can optionally pass a callback as a parameter:

Key is the attribute name used in Python. Attr_desc is a dict of metadata, currently containing ‘type’ with the msrest type and ‘key’ with the RestAPI-encoded key. Value is the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, the result is considered a hierarchical dict.


If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A dict JSON compatible object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

ParametersReturns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending() classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, uses the key extractors rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor and last_rest_key_case_insensitive_extractor.

ParametersReturns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model() serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to Azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

keep_readonly (bool) – If you want to serialize the readonly attributes

Returns

A dict JSON compatible object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation errors

Return type

list

class azure.storage.blob.CustomerProvidedEncryptionKey(key_value, key_hash)[source]

All data in Azure Storage is encrypted at-rest using an account-level encryption key. In versions 2018-06-17 and newer, you can manage the key used to encrypt blob contents and application metadata per-blob by providing an AES-256 encryption key in requests to the storage service.

When you use a customer-provided key, Azure Storage does not manage or persist your key. When writing data to a blob, the provided key is used to encrypt your data before writing it to disk. A SHA-256 hash of the encryption key is written alongside the blob contents, and is used to verify that all subsequent operations against the blob use the same encryption key. This hash cannot be used to retrieve the encryption key or decrypt the contents of the blob. When reading a blob, the provided key is used to decrypt your data after reading it from disk. In both cases, the provided encryption key is securely discarded as soon as the encryption or decryption process completes.

ParametersVariables

algorithm (str) – Specifies the algorithm to use when encrypting data using the given key. Must be AES256.

class azure.storage.blob.DelimitedJsonDialect(**kwargs)[source]

Defines the input or output JSON serialization for a blob data query.

keyword str delimiter

The line separator character; the default value is '\n' (newline).

get(key, default=None) has_key(k) items() keys() update(*args, **kwargs) values() class azure.storage.blob.DelimitedTextDialect(**kwargs)[source]

Defines the input or output delimited (CSV) serialization for a blob query request.

keyword str delimiter

Column separator, defaults to ‘,’.

keyword str quotechar

Field quote, defaults to ‘”’.

keyword str lineterminator

Record separator, defaults to '\n' (newline).

keyword str escapechar

Escape char, defaults to empty.

keyword bool has_header

Whether the blob data includes headers in the first line. The default value is False, meaning that the data will be returned inclusive of the first line. If set to True, the data will be returned exclusive of the first line.
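These keywords mirror standard CSV dialect options, so the semantics are easy to check locally with the stdlib csv module. This is an analogy for illustration, not the blob query engine itself:

```python
import csv
import io

data = "name,qty\r\nwidget,2\r\ngadget,5\r\n"

# Parse with the same dialect options DelimitedTextDialect describes:
# delimiter=',', quotechar='"'. Dropping the first row mimics the effect
# of has_header=True (data returned exclusive of the first line).
rows = list(csv.reader(io.StringIO(data), delimiter=",", quotechar='"'))
body = rows[1:]
print(body)  # [['widget', '2'], ['gadget', '5']]
```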

get(key, default=None) has_key(k) items() keys() update(*args, **kwargs) values() class azure.storage.blob.ExponentialRetry(initial_backoff=15, increment_base=3, retry_total=3, retry_to_secondary=False, random_jitter_range=3, **kwargs)[source]

Exponential retry.

Constructs an Exponential retry object. The initial_backoff is used for the first retry. Subsequent retries are retried after initial_backoff + increment_base^retry_count seconds.
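With the default settings, the documented formula can be worked through directly. This is an arithmetic sketch that ignores random jitter, not the SDK's get_backoff_time implementation:

```python
# Sketch of the documented ExponentialRetry backoff formula, ignoring jitter:
# sleep initial_backoff + increment_base ** retry_count seconds.
def backoff_seconds(retry_count, initial_backoff=15, increment_base=3):
    return initial_backoff + increment_base ** retry_count

schedule = [backoff_seconds(n) for n in range(1, 4)]
print(schedule)  # [18, 24, 42]
```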

Parametersconfigure_retries(request) get_backoff_time(settings)[source]

Calculates how long to sleep before retrying.

Returns

An integer indicating how long to wait before retrying the request, or None to indicate no retry should be performed.

Return type

int or None

increment(settings, request, response=None, error=None)

Increment the retry counters.

ParametersReturns

Whether the retry attempts are exhausted.

send(request)

Abstract send method for a synchronous pipeline. Mutates the request.

Context content is dependent on the HttpTransport.

Parameters

request (PipelineRequest) – The pipeline request object

Returns

The pipeline response object.

Return type

PipelineResponse

sleep(settings, transport) class azure.storage.blob.FilteredBlob(**kwargs)[source]

Blob info from a Filter Blobs API call.

Variablesget(key, default=None) has_key(k) items() keys() update(*args, **kwargs) values() class azure.storage.blob.ImmutabilityPolicy(**kwargs)[source]

Optional parameters for setting the immutability policy of a blob, blob snapshot or blob version.

New in version 12.10.0: This was introduced in API version ‘2020-10-02’.

Keyword Argumentsget(key, default=None) has_key(k) items() keys() update(*args, **kwargs) values() class azure.storage.blob.LeaseProperties(**kwargs)[source]

Blob Lease Properties.

Variablesget(key, default=None) has_key(k) items() keys() update(*args, **kwargs) values() class azure.storage.blob.LinearRetry(backoff=15, retry_total=3, retry_to_secondary=False, random_jitter_range=3, **kwargs)[source]

Linear retry.

Constructs a Linear retry object.

Parametersconfigure_retries(request) get_backoff_time(settings)[source]

Calculates how long to sleep before retrying.

Returns

An integer indicating how long to wait before retrying the request, or None to indicate no retry should be performed.

Return type

int or None

increment(settings, request, response=None, error=None)

Increment the retry counters.

ParametersReturns

Whether the retry attempts are exhausted.

send(request)

Abstract send method for a synchronous pipeline. Mutates the request.

Context content is dependent on the HttpTransport.

Parameters

request (PipelineRequest) – The pipeline request object

Returns

The pipeline response object.

Return type

PipelineResponse

sleep(settings, transport) class azure.storage.blob.LocationMode[source]

Specifies the location the request should be sent to. This mode only applies for RA-GRS accounts which allow secondary read access. All other account types must use PRIMARY.

PRIMARY = 'primary'

Requests should be sent to the primary location.

SECONDARY = 'secondary'

Requests should be sent to the secondary location, if possible.

class azure.storage.blob.Metrics(**kwargs)[source]

A summary of request statistics grouped by API in hour or minute aggregates for blobs.

Keyword Argumentsas_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage can optionally pass a callback as a parameter:

Key is the attribute name used in Python. Attr_desc is a dict of metadata, currently containing ‘type’ with the msrest type and ‘key’ with the RestAPI-encoded key. Value is the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, the result is considered a hierarchical dict.


If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A dict JSON compatible object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

ParametersReturns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending() classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, uses the key extractors rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor and last_rest_key_case_insensitive_extractor.

ParametersReturns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model() serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to Azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

keep_readonly (bool) – If you want to serialize the readonly attributes

Returns

A dict JSON compatible object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation errors

Return type

list

class azure.storage.blob.ObjectReplicationPolicy(**kwargs)[source]

Policy id and rule ids applied to a blob.

Variablesget(key, default=None) has_key(k) items() keys() update(*args, **kwargs) values() class azure.storage.blob.ObjectReplicationRule(**kwargs)[source]

Policy id and rule ids applied to a blob.

Variablesget(key, default=None) has_key(k) items() keys() update(*args, **kwargs) values() class azure.storage.blob.PageRange(start=None, end=None)[source]

Page Range for page blob.

Parametersget(key, default=None) has_key(k) items() keys() update(*args, **kwargs) values() class azure.storage.blob.PremiumPageBlobTier(value)[source]

Specifies the page blob tier to set the blob to. This is only applicable to page blobs on premium storage accounts. Please take a look at: https://docs.microsoft.com/en-us/azure/storage/storage-premium-storage#scalability-and-performance-targets for detailed information on the corresponding IOPS and throughput per PageBlobTier.

P10 = 'P10'

P10 Tier

P20 = 'P20'

P20 Tier

P30 = 'P30'

P30 Tier

P4 = 'P4'

P4 Tier

P40 = 'P40'

P40 Tier

P50 = 'P50'

P50 Tier

P6 = 'P6'

P6 Tier

P60 = 'P60'

P60 Tier

class azure.storage.blob.PublicAccess(value)[source]

Specifies whether data in the container may be accessed publicly and the level of access.

Blob = 'blob'

Specifies public read access for blobs. Blob data within this container can be read via anonymous request, but container data is not available. Clients cannot enumerate blobs within the container via anonymous request.

Container = 'container'

Specifies full public read access for container and blob data. Clients can enumerate blobs within the container via anonymous request, but cannot enumerate containers within the storage account.

OFF = 'off'

Specifies that there is no public read access for either the container or the blobs within it. Clients cannot enumerate the containers within the storage account, nor the blobs within the container.

class azure.storage.blob.QuickQueryDialect(value)[source]

Specifies the quick query input/output dialect.

DelimitedJson = 'DelimitedJsonDialect' DelimitedText = 'DelimitedTextDialect' Parquet = 'ParquetDialect' class azure.storage.blob.RehydratePriority(value)[source]

If an object is in the rehydrate pending state, this header is returned with the priority of the rehydrate operation. Valid values are High and Standard.

HIGH = 'High' STANDARD = 'Standard' class azure.storage.blob.ResourceTypes(service=False, container=False, object=False)[source]

Specifies the resource types that are accessible with the account SAS.

Parametersclassmethod from_string(string)[source]

Create a ResourceTypes from a string.

To specify service, container, or object you need only to include the first letter of the word in the string. E.g. for service and container, you would provide the string “sc”.

Parameters

string (str) – Specify service, container, or object in the string with the first letter of the word.

Returns

A ResourceTypes object

Return type

ResourceTypes

class azure.storage.blob.RetentionPolicy(enabled=False, days=None)[source]

The retention policy which determines how long the associated data should persist.

Parametersas_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage can optionally pass a callback as a parameter:

Key is the attribute name used in Python. Attr_desc is a dict of metadata, currently containing ‘type’ with the msrest type and ‘key’ with the RestAPI-encoded key. Value is the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, the result is considered a hierarchical dict.


If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A dict JSON compatible object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

ParametersReturns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending() classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, uses the key extractors rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor and last_rest_key_case_insensitive_extractor.

ParametersReturns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model() serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to Azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

keep_readonly (bool) – If you want to serialize the readonly attributes

Returns

A dict JSON compatible object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation errors

Return type

list

class azure.storage.blob.SequenceNumberAction(value)[source]

Sequence number actions.

Increment = 'increment'

Increments the value of the sequence number by 1. If specifying this option, do not include the x-ms-blob-sequence-number header.

Max = 'max'

Sets the sequence number to be the higher of the value included with the request and the value currently stored for the blob.

Update = 'update'

Sets the sequence number to the value included with the request.
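The three actions can be summarized by sketching the resulting sequence number. This is an illustration of the documented semantics, not the service's implementation:

```python
# Illustration of SequenceNumberAction semantics -- not service code.
def next_sequence_number(action, current, requested=None):
    if action == "increment":
        # x-ms-blob-sequence-number must be omitted with this action.
        return current + 1
    if action == "max":
        return max(current, requested)
    if action == "update":
        return requested
    raise ValueError(f"unknown action: {action}")

print(next_sequence_number("increment", 5))  # 6
print(next_sequence_number("max", 5, 3))     # 5
print(next_sequence_number("update", 5, 3))  # 3
```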

class azure.storage.blob.StandardBlobTier(value)[source]

Specifies the blob tier to set the blob to. This is only applicable for block blobs on standard storage accounts.

Archive = 'Archive'

Archive

Cool = 'Cool'

Cool

Hot = 'Hot'

Hot

class azure.storage.blob.StaticWebsite(**kwargs)[source]

The properties that enable an account to host a static website.

Keyword Argumentsas_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage can optionally pass a callback as a parameter:

Key is the attribute name used in Python. Attr_desc is a dict of metadata, currently containing ‘type’ with the msrest type and ‘key’ with the RestAPI-encoded key. Value is the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, the result is considered a hierarchical dict.


If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A dict JSON compatible object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

ParametersReturns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending() classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, uses the key extractors rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor and last_rest_key_case_insensitive_extractor.

ParametersReturns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model() serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to Azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

keep_readonly (bool) – If you want to serialize the readonly attributes

Returns

A dict JSON compatible object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation errors

Return type

list

class azure.storage.blob.StorageErrorCode(value)[source]

An enumeration.

account_already_exists = 'AccountAlreadyExists' account_being_created = 'AccountBeingCreated' account_is_disabled = 'AccountIsDisabled' append_position_condition_not_met = 'AppendPositionConditionNotMet' authentication_failed = 'AuthenticationFailed' authorization_failure = 'AuthorizationFailure' blob_already_exists = 'BlobAlreadyExists' blob_archived = 'BlobArchived' blob_being_rehydrated = 'BlobBeingRehydrated' blob_not_archived = 'BlobNotArchived' blob_not_found = 'BlobNotFound' blob_overwritten = 'BlobOverwritten' blob_tier_inadequate_for_content_length = 'BlobTierInadequateForContentLength' block_count_exceeds_limit = 'BlockCountExceedsLimit' block_list_too_long = 'BlockListTooLong' cannot_change_to_lower_tier = 'CannotChangeToLowerTier' cannot_delete_file_or_directory = 'CannotDeleteFileOrDirectory' cannot_verify_copy_source = 'CannotVerifyCopySource' client_cache_flush_delay = 'ClientCacheFlushDelay' condition_headers_not_supported = 'ConditionHeadersNotSupported' condition_not_met = 'ConditionNotMet' container_already_exists = 'ContainerAlreadyExists' container_being_deleted = 'ContainerBeingDeleted' container_disabled = 'ContainerDisabled' container_not_found = 'ContainerNotFound' container_quota_downgrade_not_allowed = 'ContainerQuotaDowngradeNotAllowed' content_length_larger_than_tier_limit = 'ContentLengthLargerThanTierLimit' content_length_must_be_zero = 'ContentLengthMustBeZero' copy_across_accounts_not_supported = 'CopyAcrossAccountsNotSupported' copy_id_mismatch = 'CopyIdMismatch' delete_pending = 'DeletePending' destination_path_is_being_deleted = 'DestinationPathIsBeingDeleted' directory_not_empty = 'DirectoryNotEmpty' empty_metadata_key = 'EmptyMetadataKey' feature_version_mismatch = 'FeatureVersionMismatch' file_lock_conflict = 'FileLockConflict' file_system_already_exists = 'FilesystemAlreadyExists' file_system_being_deleted = 'FilesystemBeingDeleted' file_system_not_found = 'FilesystemNotFound' incremental_copy_blob_mismatch = 
'IncrementalCopyBlobMismatch' incremental_copy_of_eralier_version_snapshot_not_allowed = 'IncrementalCopyOfEralierVersionSnapshotNotAllowed' incremental_copy_source_must_be_snapshot = 'IncrementalCopySourceMustBeSnapshot' infinite_lease_duration_required = 'InfiniteLeaseDurationRequired' insufficient_account_permissions = 'InsufficientAccountPermissions' internal_error = 'InternalError' invalid_authentication_info = 'InvalidAuthenticationInfo' invalid_blob_or_block = 'InvalidBlobOrBlock' invalid_blob_tier = 'InvalidBlobTier' invalid_blob_type = 'InvalidBlobType' invalid_block_id = 'InvalidBlockId' invalid_block_list = 'InvalidBlockList' invalid_destination_path = 'InvalidDestinationPath' invalid_file_or_directory_path_name = 'InvalidFileOrDirectoryPathName' invalid_flush_position = 'InvalidFlushPosition' invalid_header_value = 'InvalidHeaderValue' invalid_http_verb = 'InvalidHttpVerb' invalid_input = 'InvalidInput' invalid_marker = 'InvalidMarker' invalid_md5 = 'InvalidMd5' invalid_metadata = 'InvalidMetadata' invalid_operation = 'InvalidOperation' invalid_page_range = 'InvalidPageRange' invalid_property_name = 'InvalidPropertyName' invalid_query_parameter_value = 'InvalidQueryParameterValue' invalid_range = 'InvalidRange' invalid_rename_source_path = 'InvalidRenameSourcePath' invalid_resource_name = 'InvalidResourceName' invalid_source_blob_type = 'InvalidSourceBlobType' invalid_source_blob_url = 'InvalidSourceBlobUrl' invalid_source_or_destination_resource_type = 'InvalidSourceOrDestinationResourceType' invalid_source_uri = 'InvalidSourceUri' invalid_uri = 'InvalidUri' invalid_version_for_page_blob_operation = 'InvalidVersionForPageBlobOperation' invalid_xml_document = 'InvalidXmlDocument' invalid_xml_node_value = 'InvalidXmlNodeValue' lease_already_broken = 'LeaseAlreadyBroken' lease_already_present = 'LeaseAlreadyPresent' lease_id_mismatch_with_blob_operation = 'LeaseIdMismatchWithBlobOperation' lease_id_mismatch_with_container_operation = 
'LeaseIdMismatchWithContainerOperation' lease_id_mismatch_with_lease_operation = 'LeaseIdMismatchWithLeaseOperation' lease_id_missing = 'LeaseIdMissing' lease_is_already_broken = 'LeaseIsAlreadyBroken' lease_is_breaking_and_cannot_be_acquired = 'LeaseIsBreakingAndCannotBeAcquired' lease_is_breaking_and_cannot_be_changed = 'LeaseIsBreakingAndCannotBeChanged' lease_is_broken_and_cannot_be_renewed = 'LeaseIsBrokenAndCannotBeRenewed' lease_lost = 'LeaseLost' lease_name_mismatch = 'LeaseNameMismatch' lease_not_present_with_blob_operation = 'LeaseNotPresentWithBlobOperation' lease_not_present_with_container_operation = 'LeaseNotPresentWithContainerOperation' lease_not_present_with_lease_operation = 'LeaseNotPresentWithLeaseOperation' max_blob_size_condition_not_met = 'MaxBlobSizeConditionNotMet' md5_mismatch = 'Md5Mismatch' message_not_found = 'MessageNotFound' message_too_large = 'MessageTooLarge' metadata_too_large = 'MetadataTooLarge' missing_content_length_header = 'MissingContentLengthHeader' missing_required_header = 'MissingRequiredHeader' missing_required_query_parameter = 'MissingRequiredQueryParameter' missing_required_xml_node = 'MissingRequiredXmlNode' multiple_condition_headers_not_supported = 'MultipleConditionHeadersNotSupported' no_authentication_information = 'NoAuthenticationInformation' no_pending_copy_operation = 'NoPendingCopyOperation' operation_not_allowed_on_incremental_copy_blob = 'OperationNotAllowedOnIncrementalCopyBlob' operation_timed_out = 'OperationTimedOut' out_of_range_input = 'OutOfRangeInput' out_of_range_query_parameter_value = 'OutOfRangeQueryParameterValue' parent_not_found = 'ParentNotFound' path_already_exists = 'PathAlreadyExists' path_conflict = 'PathConflict' path_not_found = 'PathNotFound' pending_copy_operation = 'PendingCopyOperation' pop_receipt_mismatch = 'PopReceiptMismatch' previous_snapshot_cannot_be_newer = 'PreviousSnapshotCannotBeNewer' previous_snapshot_not_found = 'PreviousSnapshotNotFound' 
previous_snapshot_operation_not_supported = 'PreviousSnapshotOperationNotSupported' queue_already_exists = 'QueueAlreadyExists' queue_being_deleted = 'QueueBeingDeleted' queue_disabled = 'QueueDisabled' queue_not_empty = 'QueueNotEmpty' queue_not_found = 'QueueNotFound' read_only_attribute = 'ReadOnlyAttribute' rename_destination_parent_path_not_found = 'RenameDestinationParentPathNotFound' request_body_too_large = 'RequestBodyTooLarge' request_url_failed_to_parse = 'RequestUrlFailedToParse' resource_already_exists = 'ResourceAlreadyExists' resource_not_found = 'ResourceNotFound' resource_type_mismatch = 'ResourceTypeMismatch' sequence_number_condition_not_met = 'SequenceNumberConditionNotMet' sequence_number_increment_too_large = 'SequenceNumberIncrementTooLarge' server_busy = 'ServerBusy' share_already_exists = 'ShareAlreadyExists' share_being_deleted = 'ShareBeingDeleted' share_disabled = 'ShareDisabled' share_has_snapshots = 'ShareHasSnapshots' share_not_found = 'ShareNotFound' share_snapshot_count_exceeded = 'ShareSnapshotCountExceeded' share_snapshot_in_progress = 'ShareSnapshotInProgress' share_snapshot_operation_not_supported = 'ShareSnapshotOperationNotSupported' sharing_violation = 'SharingViolation' snaphot_operation_rate_exceeded = 'SnaphotOperationRateExceeded' snapshot_count_exceeded = 'SnapshotCountExceeded' snapshots_present = 'SnapshotsPresent' source_condition_not_met = 'SourceConditionNotMet' source_path_is_being_deleted = 'SourcePathIsBeingDeleted' source_path_not_found = 'SourcePathNotFound' system_in_use = 'SystemInUse' target_condition_not_met = 'TargetConditionNotMet' unauthorized_blob_overwrite = 'UnauthorizedBlobOverwrite' unsupported_header = 'UnsupportedHeader' unsupported_http_verb = 'UnsupportedHttpVerb' unsupported_query_parameter = 'UnsupportedQueryParameter' unsupported_rest_version = 'UnsupportedRestVersion' unsupported_xml_node = 'UnsupportedXmlNode' class azure.storage.blob.StorageStreamDownloader(clients=None, config=None, 
start_range=None, end_range=None, validate_content=None, encryption_options=None, max_concurrency=1, name=None, container=None, encoding=None, **kwargs)[source]

A streaming object to download from Azure Storage.

Variableschunks()Iterator[bytes][source]

Iterate over chunks in the download stream.

Return type

Iterator[bytes]

Example:

Download a blob using chunks().
# This returns a StorageStreamDownloader.
stream = source_blob_client.download_blob()
block_list = []

# Read data in chunks to avoid loading all into memory at once
for chunk in stream.chunks():
    # Process your data (anything can be done here; `chunk` is a byte array)
    block_id = str(uuid.uuid4())
    destination_blob_client.stage_block(block_id=block_id, data=chunk)
    block_list.append(BlobBlock(block_id=block_id))
content_as_bytes(max_concurrency=1)[source]

Download the contents of this blob.

This operation is blocking until all data is downloaded.

Keyword Arguments

max_concurrency (int) – The number of parallel connections with which to download.

Return type

bytes

content_as_text(max_concurrency=1, encoding='UTF-8')[source]

Download the contents of this blob, and decode as text.

This operation is blocking until all data is downloaded.

Keyword Arguments

max_concurrency (int) – The number of parallel connections with which to download.

Parameters

encoding (str) – Text encoding used to decode the downloaded bytes. Default is UTF-8.

Return type

str

download_to_stream(stream, max_concurrency=1)[source]

Download the contents of this blob to a stream.

Parameters

stream – The stream to download to. This can be an open file-handle, or any writable stream. The stream must be seekable if the download uses more than one parallel connection.

Returns

The properties of the downloaded blob.

Return type

Any

readall()[source]

Download the contents of this blob.

This operation is blocking until all data is downloaded.

Return type

bytes or str

readinto(stream)[source]

Download the contents of this blob to a stream.

Parameters

stream – The stream to download to. This can be an open file-handle, or any writable stream. The stream must be seekable if the download uses more than one parallel connection.

Returns

The number of bytes read.

Return type

int

class azure.storage.blob.UserDelegationKey[source]

Represents a user delegation key, provided to the user by Azure Storage based on their Azure Active Directory access token.

The fields are saved as simple strings since the user does not have to interact with this object; to generate an identity SAS, the user can simply pass it to the right API.

Variables

azure.storage.blob.download_blob_from_url(blob_url: str, output: str, credential: Optional[Any] = None, **kwargs) → None[source]

Download the contents of a blob to a local file or stream.

Parameters

Keyword Arguments

Return type

None

azure.storage.blob.generate_account_sas(account_name: str, account_key: str, resource_types: Union[ResourceTypes, str], permission: Union[AccountSasPermissions, str], expiry: Optional[Union[datetime, str]], start: Optional[Union[datetime, str]] = None, ip: Optional[str] = None, **kwargs: Any) → str[source]

Generates a shared access signature for the blob service.

Use the returned signature with the credential parameter of any BlobServiceClient, ContainerClient or BlobClient.

Parameters

Keyword Arguments

protocol (str) – Specifies the protocol permitted for a request made with this shared access signature. The default value is https.

Returns

A Shared Access Signature (sas) token.

Return type

str

Example:

Generating a shared access signature.
# Create a SAS token to use to authenticate a new client
from datetime import datetime, timedelta
from azure.storage.blob import ResourceTypes, AccountSasPermissions, generate_account_sas

sas_token = generate_account_sas(
    blob_service_client.account_name,
    account_key=blob_service_client.credential.account_key,
    resource_types=ResourceTypes(object=True),
    permission=AccountSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1)
)
azure.storage.blob.generate_blob_sas(account_name: str, container_name: str, blob_name: str, snapshot: Optional[str] = None, account_key: Optional[str] = None, user_delegation_key: Optional[UserDelegationKey] = None, permission: Optional[Union[BlobSasPermissions, str]] = None, expiry: Optional[Union[datetime, str]] = None, start: Optional[Union[datetime, str]] = None, policy_id: Optional[str] = None, ip: Optional[str] = None, **kwargs: Any) → Any[source]

Generates a shared access signature for a blob.

Use the returned signature with the credential parameter of any BlobServiceClient, ContainerClient or BlobClient.

Parameters

Keyword Arguments

Returns

A Shared Access Signature (sas) token.

Return type

str

azure.storage.blob.generate_container_sas(account_name: str, container_name: str, account_key: Optional[str] = None, user_delegation_key: Optional[UserDelegationKey] = None, permission: Optional[Union[ContainerSasPermissions, str]] = None, expiry: Optional[Union[datetime, str]] = None, start: Optional[Union[datetime, str]] = None, policy_id: Optional[str] = None, ip: Optional[str] = None, **kwargs: Any) → Any[source]

Generates a shared access signature for a container.

Use the returned signature with the credential parameter of any BlobServiceClient, ContainerClient or BlobClient.

Parameters

Keyword Arguments

Returns

A Shared Access Signature (sas) token.

Return type

str

Example:

Generating a sas token.
# Use access policy to generate a sas token
from azure.storage.blob import generate_container_sas

sas_token = generate_container_sas(
    container_client.account_name,
    container_client.container_name,
    account_key=container_client.credential.account_key,
    policy_id='my-access-policy-id'
)
azure.storage.blob.upload_blob_to_url(blob_url: str, data: Union[Iterable[AnyStr], IO[AnyStr]], credential: Optional[Any] = None, **kwargs) → Dict[str, Any][source]

Upload data to a given URL.

The data will be uploaded as a block blob.

Parameters

Keyword Arguments

Returns

Blob-updated property dict (Etag and last modified)

Return type

dict(str, Any)
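A common pattern on top of this function is serializing a Python object and pushing the bytes to a SAS-authenticated blob URL. The helper below is a sketch, not part of the library: its name is hypothetical, and `upload_fn` is injectable only so the logic can be exercised without a live storage account (by default it resolves to azure.storage.blob.upload_blob_to_url at call time).

```python
import json

# Hypothetical helper: serialize `payload` to JSON and upload it to
# `blob_url` as a block blob. `overwrite=True` is passed through so an
# existing blob at the URL is replaced rather than raising an error.
def upload_json_to_url(blob_url, payload, credential=None, upload_fn=None):
    if upload_fn is None:
        from azure.storage.blob import upload_blob_to_url
        upload_fn = upload_blob_to_url
    data = json.dumps(payload).encode("utf-8")
    return upload_fn(blob_url, data, credential=credential, overwrite=True)
```

The return value is the blob-updated property dict (Etag and last modified) described above, passed through unchanged.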

Subpackages
