#' Storage account resource class
#'
#' Class representing a storage account, exposing methods for working with it.
#'
#' @docType class
#' @section Methods:
#' The following methods are available, in addition to those provided by the [AzureRMR::az_resource] class:
#' - `new(...)`: Initialize a new storage object. See 'Initialization'.
#' - `list_keys()`: Return the access keys for this account.
#' - `get_account_sas(...)`: Return an account shared access signature (SAS). See 'Creating a shared access signature' below.
#' - `get_service_sas(...)`: Return a service SAS. See 'Creating a shared access signature' below.
#' - `get_user_delegation_key(...)`: Returns a key that can be used to construct a user delegation SAS.
#' - `get_user_delegation_sas(...)`: Return a user delegation SAS.
#' - `revoke_user_delegation_keys()`: Revokes all user delegation keys for the account. This also renders all SASes obtained via such keys invalid.
#' - `get_blob_endpoint(key, sas)`: Return the account's blob storage endpoint, along with an access key and/or a SAS. See 'Endpoints' for more details.
#' - `get_file_endpoint(key, sas)`: Return the account's file storage endpoint.
#' - `get_adls_endpoint(key, sas)`: Return the account's ADLSgen2 endpoint.
#' - `regen_key(key)`: Regenerates (creates a new value for) an access key. The argument `key` can be 1 or 2.
#'
#' @section Initialization:
#' Initializing a new object of this class can either retrieve an existing storage account, or create a new account on the host. Generally, the best way to initialize an object is via the `get_storage_account`, `create_storage_account` or `list_storage_accounts` methods of the [az_resource_group] class, which handle the details automatically.
#'
#' @section Creating a shared access signature:
#' Note that you don't need to worry about this section if you have been _given_ a SAS, and only want to use it to access storage.
#'
#' AzureStor supports generating three kinds of SAS: account, service and user delegation. An account SAS can be used with any type of storage. A service SAS can be used with blob and file storage, while a user delegation SAS can be used with blob and ADLS2 storage.
#'
#' To create an account SAS, call the `get_account_sas()` method. This has the following signature:
#'
#' ```
#' get_account_sas(key=self$list_keys()[1], start=NULL, expiry=NULL, services="bqtf", permissions="rl",
#' resource_types="sco", ip=NULL, protocol=NULL)
#' ```
#'
#' To create a service SAS, call the `get_service_sas()` method, which has the following signature:
#'
#' ```
#' get_service_sas(key=self$list_keys()[1], resource, service, start=NULL, expiry=NULL, permissions="r",
#' resource_type=NULL, ip=NULL, protocol=NULL, policy=NULL, snapshot_time=NULL, directory_depth=NULL)
#' ```
#'
#' To create a user delegation SAS, you must first create a user delegation _key_. This takes the place of the account's access key in generating the SAS. The `get_user_delegation_key()` method has the following signature:
#'
#' ```
#' get_user_delegation_key(token=self$token, key_start=NULL, key_expiry=NULL)
#' ```
#'
#' Once you have a user delegation key, you can use it to obtain a user delegation SAS. The `get_user_delegation_sas()` method has the following signature:
#'
#' ```
#' get_user_delegation_sas(key, resource, start=NULL, expiry=NULL, permissions="rl",
#' resource_type="c", ip=NULL, protocol=NULL, snapshot_time=NULL)
#' ```
#'
#' (Note that the `key` argument for this method is the user delegation key, _not_ the account key.)
#'
#' To invalidate all user delegation keys, as well as the SASes generated with them, call the `revoke_user_delegation_keys()` method. This has the following signature:
#'
#' ```
#' revoke_user_delegation_keys()
#' ```
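#'
#' As a minimal usage sketch (assuming `stor` is an `az_storage` object; the container name and expiry values are placeholders):
#'
#' ```
#' # account SAS valid for 7 days, with read/list permissions
#' sas <- stor$get_account_sas(expiry=Sys.Date() + 7, permissions="rl")
#'
#' # user delegation SAS for a single container, via a user delegation key
#' userkey <- stor$get_user_delegation_key(key_expiry=Sys.Date() + 7)
#' udsas <- stor$get_user_delegation_sas(userkey, resource="mycontainer", expiry=Sys.Date() + 7)
#' ```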
#'
#' See the [Shared access signatures][sas] page for more information about this topic.
#'
#' @section Endpoints:
#' The client-side interaction with a storage account is via an _endpoint_. A storage account can have several endpoints, one for each type of storage supported: blob, file, queue and table.
#'
#' The client-side interface in AzureStor is implemented using S3 classes. This is for consistency with other data access packages in R, which mostly use S3. It also emphasises the distinction between Resource Manager (which is for interacting with the storage account itself) and the client (which is for accessing files and data stored in the account).
#'
#' To create a storage endpoint independently of Resource Manager (for example if you are a user without admin or owner access to the account), use the [blob_endpoint], [file_endpoint] or [adls_endpoint] functions.
#'
#' If a storage endpoint is created without an access key and SAS, only public (anonymous) access is possible.
#'
#' @seealso
#' [blob_endpoint], [file_endpoint], [adls_endpoint],
#' [create_storage_account], [get_storage_account], [delete_storage_account], [Date], [POSIXt]
#'
#' [Azure Storage Provider API reference](https://docs.microsoft.com/en-us/rest/api/storagerp/),
#' [Azure Storage Services API reference](https://docs.microsoft.com/en-us/rest/api/storageservices/)
#'
#' [Create an account SAS](https://docs.microsoft.com/en-us/rest/api/storageservices/create-account-sas),
#' [Create a user delegation SAS](https://docs.microsoft.com/en-us/rest/api/storageservices/create-user-delegation-sas),
#' [Create a service SAS](https://docs.microsoft.com/en-us/rest/api/storageservices/create-service-sas)
#'
#' @examples
#' \dontrun{
#'
#' # recommended way of retrieving a resource: via a resource group object
#' stor <- resgroup$get_storage_account("mystorage")
#'
#' # list account access keys
#' stor$list_keys()
#'
#' # regenerate a key
#' stor$regen_key(1)
#'
#' # storage endpoints
#' stor$get_blob_endpoint()
#' stor$get_file_endpoint()
#'
#' }
#' @export
az_storage <- R6::R6Class("az_storage", inherit=AzureRMR::az_resource,
public=list(
list_keys=function()
{
keys <- named_list(private$res_op("listKeys", http_verb="POST")$keys, "keyName")
sapply(keys, `[[`, "value")
},
get_account_sas=function(key=self$list_keys()[1], start=NULL, expiry=NULL, services="bqtf", permissions="rl",
resource_types="sco", ip=NULL, protocol=NULL)
{
get_account_sas(self, key=key, start=start, expiry=expiry, services=services, permissions=permissions,
resource_types=resource_types, ip=ip, protocol=protocol)
},
get_user_delegation_key=function(token=self$token, key_start=NULL, key_expiry=NULL)
{
get_user_delegation_key(self, token=token, key_start=key_start, key_expiry=key_expiry)
},
revoke_user_delegation_keys=function()
{
revoke_user_delegation_keys(self)
},
get_user_delegation_sas=function(key, resource, start=NULL, expiry=NULL, permissions="rl",
resource_type="c", ip=NULL, protocol=NULL, snapshot_time=NULL)
{
get_user_delegation_sas(self, key=key, resource=resource, start=start, expiry=expiry, permissions=permissions,
resource_type=resource_type, ip=ip, protocol=protocol, snapshot_time=snapshot_time)
},
get_service_sas=function(key=self$list_keys()[1], resource, service, start=NULL, expiry=NULL, permissions="r",
resource_type=NULL, ip=NULL, protocol=NULL, policy=NULL, snapshot_time=NULL, directory_depth=NULL)
{
get_service_sas(self, key=key, resource=resource, service=service, start=start, expiry=expiry,
permissions=permissions, resource_type=resource_type, ip=ip, protocol=protocol, policy=policy,
snapshot_time=snapshot_time, directory_depth=directory_depth)
},
get_blob_endpoint=function(key=self$list_keys()[1], sas=NULL, token=NULL)
{
blob_endpoint(self$properties$primaryEndpoints$blob, key=key, sas=sas, token=token)
},
get_file_endpoint=function(key=self$list_keys()[1], sas=NULL, token=NULL)
{
file_endpoint(self$properties$primaryEndpoints$file, key=key, sas=sas, token=token)
},
get_adls_endpoint=function(key=self$list_keys()[1], sas=NULL, token=NULL)
{
adls_endpoint(self$properties$primaryEndpoints$dfs, key=key, sas=sas, token=token)
},
regen_key=function(key=1)
{
body <- list(keyName=paste0("key", key))
keys <- self$do_operation("regenerateKey", body=body, encode="json", http_verb="POST")
keys <- named_list(keys$keys, "keyName")
sapply(keys, `[[`, "value")
},
print=function(...)
{
cat("<Azure resource ", self$type, "/", self$name, ">\n", sep="")
endp <- self$properties$primaryEndpoints
endp <- paste0(" ", names(endp), ": ", endp, collapse="\n")
sku <- unlist(self$sku)
cat(" Account type:", self$kind, "\n")
cat(" SKU:", paste0(names(sku), "=", sku, collapse=", "), "\n")
cat(" Endpoints:\n")
cat(endp, "\n")
cat("---\n")
cat(AzureRMR::format_public_fields(self, exclude=c("subscription", "resource_group",
"type", "name", "kind", "sku")))
cat(AzureRMR::format_public_methods(self))
invisible(NULL)
}
))
# ---- AzureStor/R/az_storage.R ----
#' Call the azcopy file transfer utility
#'
#' @param ... Arguments to pass to AzCopy on the commandline. If no arguments are supplied, a help screen is printed.
#' @param env A named character vector of environment variables to set for AzCopy.
#' @param silent Whether to print the output from AzCopy to the screen; this also controls whether an error exit code from AzCopy is propagated as an R error. Defaults to the value of the `azure_storage_azcopy_silent` option, or FALSE if this is unset.
#'
#' @details
#' AzureStor has the ability to use the Microsoft AzCopy commandline utility to transfer files. To enable this, ensure the processx package is installed and set the argument `use_azcopy=TRUE` in any call to an upload or download function; AzureStor will then call AzCopy to perform the file transfer rather than relying on its own code. You can also call AzCopy directly with the `call_azcopy` function.
#'
#' AzureStor requires version 10 or later of AzCopy. The first time you try to run it, AzureStor will check that a suitable version of AzCopy is present, and throw an error if AzCopy cannot be found or the version found is older than 10.
#'
#' The AzCopy utility must be in your path for AzureStor to find it. Note that unlike earlier versions, AzCopy 10 is a single, self-contained binary file that can be placed in any directory.
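#'
#' For example, a minimal sketch of prepending a local AzCopy directory to the search path for the current R session (the directory shown is a placeholder):
#'
#' ```
#' Sys.setenv(PATH=paste("/path/to/azcopy_dir", Sys.getenv("PATH"), sep=.Platform$path.sep))
#' ```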
#'
#' @return
#' A list, invisibly, with the following components:
#' - `status`: The exit status of the AzCopy command. If this is NA, then the process was killed and had no exit status.
#' - `stdout`: The standard output of the command.
#' - `stderr`: The standard error of the command.
#' - `timeout`: Whether AzCopy was killed because of a timeout.
#' @seealso
#' [processx::run], [download_blob], [download_azure_file], [download_adls_file]
#'
#' [AzCopy page on Microsoft Docs](https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10)
#'
#' [AzCopy GitHub repo](https://github.com/Azure/azure-storage-azcopy)
#' @examples
#' \dontrun{
#'
#' endp <- storage_endpoint("https://mystorage.blob.core.windows.net", sas="mysas")
#' cont <- storage_container(endp, "mycontainer")
#'
#' # print various help screens
#' call_azcopy("help")
#' call_azcopy("help", "copy")
#'
#' # calling azcopy to download a blob
#' storage_download(cont, "myblob.csv", use_azcopy=TRUE)
#'
#' # calling azcopy directly (must specify the SAS explicitly in the source URL)
#' call_azcopy("copy",
#' "https://mystorage.blob.core.windows.net/mycontainer/myblob.csv?mysas",
#' "myblob.csv")
#'
#' }
#' @aliases azcopy
#' @rdname azcopy
#' @export
call_azcopy <- function(..., env=NULL, silent=getOption("azure_storage_azcopy_silent", FALSE))
{
silent <- as.logical(silent)
args <- as.character(unlist(list(...)))
invisible(processx::run(get_azcopy_path(), args, env=env, echo_cmd=!silent, echo=!silent, error_on_status=!silent))
}
call_azcopy_from_storage <- function(object, ...)
{
if(!requireNamespace("processx", quietly=TRUE))
stop("The processx package must be installed to use azcopy", call.=FALSE)
auth <- azcopy_auth(object)
if(auth$login)
on.exit(call_azcopy("logout", silent=TRUE))
invisible(call_azcopy(..., env=auth$env))
}
azcopy_upload <- function(container, src, dest, ...)
{
opts <- azcopy_upload_opts(container, ...)
dest_uri <- httr::parse_url(container$endpoint$url)
dest_uri$path <- gsub("//", "/", file.path(container$name, dest))
dest <- azcopy_add_sas(container$endpoint, httr::build_url(dest_uri))
call_azcopy_from_storage(container$endpoint, "copy", src, dest, opts)
}
azcopy_upload_opts <- function(container, ...)
{
UseMethod("azcopy_upload_opts")
}
azcopy_upload_opts.blob_container <- function(container, type="BlockBlob", blocksize=2^24, recursive=FALSE,
lease=NULL, put_md5=FALSE, ...)
{
if(!is.null(lease))
warning("azcopy does not support blob leasing at this time", call.=FALSE)
c("--blob-type", type, "--block-size-mb", sprintf("%.0f", blocksize/1048576), if(recursive) "--recursive",
if(put_md5) "--put-md5")
}
azcopy_upload_opts.file_share <- function(container, blocksize=2^22, recursive=FALSE, put_md5=FALSE, ...)
{
c("--block-size-mb", sprintf("%.0f", blocksize/1048576), if(recursive) "--recursive",
if(put_md5) "--put-md5")
}
azcopy_upload_opts.adls_filesystem <- function(container, blocksize=2^24, recursive=FALSE, lease=NULL,
put_md5=FALSE, ...)
{
if(!is.null(lease))
warning("azcopy does not support blob leasing at this time", call.=FALSE)
c("--block-size-mb", sprintf("%.0f", blocksize/1048576), if(recursive) "--recursive",
if(put_md5) "--put-md5")
}
azcopy_download <- function(container, src, dest, ...)
{
opts <- azcopy_download_opts(container, ...)
src_uri <- httr::parse_url(container$endpoint$url)
src_uri$path <- gsub("//", "/", file.path(container$name, src))
src <- azcopy_add_sas(container$endpoint, httr::build_url(src_uri))
call_azcopy_from_storage(container$endpoint, "copy", src, dest, opts)
}
azcopy_download_opts <- function(container, ...)
{
UseMethod("azcopy_download_opts")
}
# currently all azcopy_download_opts methods are the same
azcopy_download_opts.blob_container <- function(container, overwrite=FALSE, recursive=FALSE, check_md5=FALSE, ...)
{
c(paste0("--overwrite=", tolower(as.character(overwrite))), if(recursive) "--recursive",
if(check_md5) c("--check-md5", "FailIfDifferent"))
}
azcopy_download_opts.file_share <- function(container, overwrite=FALSE, recursive=FALSE, check_md5=FALSE, ...)
{
c(paste0("--overwrite=", tolower(as.character(overwrite))), if(recursive) "--recursive",
if(check_md5) c("--check-md5", "FailIfDifferent"))
}
azcopy_download_opts.adls_filesystem <- function(container, overwrite=FALSE, recursive=FALSE, check_md5=FALSE, ...)
{
c(paste0("--overwrite=", tolower(as.character(overwrite))), if(recursive) "--recursive",
if(check_md5) c("--check-md5", "FailIfDifferent"))
}
# ---- AzureStor/R/azcopy.R ----
# multiple code paths for authenticating
# key: set AZCOPY_ACCOUNT_NAME and AZCOPY_ACCOUNT_KEY envvars
# sas: append sas to URL (handled separately)
# token:
# - client creds: run azcopy login, pass client secret in AZCOPY_SPA_CLIENT_SECRET envvar
# - auth code: set AZCOPY_OAUTH_TOKEN_INFO envvar
# managed: run azcopy login --identity
azcopy_auth <- function(endpoint)
{
env <- character(0)
obj <- list(login=FALSE)
if(!is.null(endpoint$key))
{
stop("AzCopy does not support authentication with a shared key", call.=FALSE)
# env["ACCOUNT_NAME"] <- sub("\\..*$", "", httr::parse_url(endpoint$url)$hostname)
# env["ACCOUNT_KEY"] <- unname(endpoint$key)
}
else if(!is.null(endpoint$token))
{
token <- endpoint$token
if(inherits(token, "AzureTokenClientCreds"))
{
obj$login <- TRUE
env["AZCOPY_SPA_CLIENT_SECRET"] <- token$client$client_secret
args <- c("login", "--service-principal", "--tenant-id", token$tenant,
"--application-id", token$client$client_id)
call_azcopy(args, env=env, silent=TRUE)
}
else if(inherits(token, c("AzureTokenAuthCode", "AzureTokenDeviceCode")))
{
creds <- list(
access_token=token$credentials$access_token,
refresh_token=token$credentials$refresh_token,
expires_in=token$credentials$expires_in,
expires_on=token$credentials$expires_on,
not_before=token$credentials$not_before,
resource=token$credentials$resource,
token_type=token$credentials$token_type,
scope=token$credentials$scope,
`_tenant`=token$tenant,
`_ad_endpoint`=token$aad_host,
`_client_id`=token$client$client_id
)
env["AZCOPY_OAUTH_TOKEN_INFO"] <- jsonlite::toJSON(creds[!sapply(creds, is.null)], auto_unbox=TRUE)
}
else if(inherits(token, "AzureTokenManaged"))
{
obj$login <- TRUE
call_azcopy(c("login", "--identity"), env, silent=TRUE)
}
else stop(
"Only client_credentials, authorization_code, device_code and managed_identity flows supported for azcopy",
call.=FALSE
)
}
obj$env <- env
obj
}
azcopy_add_sas <- function(endpoint, url)
{
if(!is.null(endpoint$sas))
url <- paste0(url, "?", sub("^\\?", "", endpoint$sas))
url
}
# ---- AzureStor/R/azcopy_auth.R ----
# azcopy unset/NULL -> not initialized
# azcopy = NA -> binary not found, or version < 10 (not usable)
# azcopy = path -> usable
get_azcopy_path <- function()
{
if(exists("azcopy", envir=.AzureStor, inherits=FALSE))
{
if(!is.na(.AzureStor$azcopy))
return(.AzureStor$azcopy)
else stop("azcopy version 10+ required but not found", call.=FALSE)
}
else
{
set_azcopy_path()
Recall()
}
}
set_azcopy_path <- function(path="azcopy")
{
path <- Sys.which(path)
if(is.na(path) || path == "")
{
.AzureStor$azcopy <- NA
return(NULL)
}
ver <- suppressWarnings(processx::run(path, "--version"))
if(!grepl("version 1[[:digit:]]", ver$stdout, ignore.case=TRUE))
{
.AzureStor$azcopy <- NA
return(NULL)
}
.AzureStor$azcopy <- unname(path)
message("Using azcopy binary ", path)
invisible(NULL)
}
# ---- AzureStor/R/azcopy_path.R ----
#' Operations on a blob endpoint
#'
#' Get, list, create, or delete blob containers.
#'
#' @param endpoint Either a blob endpoint object as created by [storage_endpoint], or a character string giving the URL of the endpoint.
#' @param key,token,sas If an endpoint object is not supplied, authentication credentials: either an access key, an Azure Active Directory (AAD) token, or a SAS, in that order of priority. If no authentication credentials are provided, only public (anonymous) access to the container is possible.
#' @param api_version If an endpoint object is not supplied, the storage API version to use when interacting with the host. Currently defaults to `"2019-07-07"`.
#' @param name The name of the blob container to get, create, or delete.
#' @param confirm For deleting a container, whether to ask for confirmation.
#' @param lease For deleting a leased container, the lease ID.
#' @param public_access For creating a container, the level of public access to allow.
#' @param x For the print method, a blob container object.
#' @param ... Further arguments passed to lower-level functions.
#'
#' @details
#' You can call these functions in a couple of ways: by passing the full URL of the container, or by passing the endpoint object and the name of the container as a string.
#'
#' If authenticating via AAD, you can supply the token either as a string, or as an object of class AzureToken, created via [AzureRMR::get_azure_token]. The latter is the recommended way of doing it, as it allows for automatic refreshing of expired tokens.
#'
#' @return
#' For `blob_container` and `create_blob_container`, an S3 object representing an existing or created container respectively.
#'
#' For `list_blob_containers`, a list of such objects.
#'
#' @seealso
#' [storage_endpoint], [az_storage], [storage_container]
#'
#' @examples
#' \dontrun{
#'
#' endp <- blob_endpoint("https://mystorage.blob.core.windows.net/", key="access_key")
#'
#' # list containers
#' list_blob_containers(endp)
#'
#' # get, create, and delete a container
#' blob_container(endp, "mycontainer")
#' create_blob_container(endp, "newcontainer")
#' delete_blob_container(endp, "newcontainer")
#'
#' # alternative way to do the same
#' blob_container("https://mystorage.blob.core.windows.net/mycontainer", key="access_key")
#' create_blob_container("https://mystorage.blob.core.windows.net/newcontainer", key="access_key")
#' delete_blob_container("https://mystorage.blob.core.windows.net/newcontainer", key="access_key")
#'
#' # authenticating via AAD
#' token <- AzureRMR::get_azure_token(resource="https://storage.azure.com/",
#' tenant="myaadtenant",
#' app="myappid",
#' password="mypassword")
#' blob_container("https://mystorage.blob.core.windows.net/mycontainer", token=token)
#'
#' }
#' @rdname blob_container
#' @export
blob_container <- function(endpoint, ...)
{
UseMethod("blob_container")
}
#' @rdname blob_container
#' @export
blob_container.character <- function(endpoint, key=NULL, token=NULL, sas=NULL,
api_version=getOption("azure_storage_api_version"),
...)
{
do.call(blob_container, generate_endpoint_container(endpoint, key, token, sas, api_version))
}
#' @rdname blob_container
#' @export
blob_container.blob_endpoint <- function(endpoint, name, ...)
{
obj <- list(name=name, endpoint=endpoint)
class(obj) <- c("blob_container", "storage_container")
obj
}
#' @rdname blob_container
#' @export
print.blob_container <- function(x, ...)
{
cat("Azure blob container '", x$name, "'\n", sep="")
url <- httr::parse_url(x$endpoint$url)
url$path <- x$name
cat(sprintf("URL: %s\n", httr::build_url(url)))
if(!is_empty(x$endpoint$key))
cat("Access key: <hidden>\n")
else cat("Access key: <none supplied>\n")
if(!is_empty(x$endpoint$token))
{
cat("Azure Active Directory access token:\n")
print(x$endpoint$token)
}
else cat("Azure Active Directory access token: <none supplied>\n")
if(!is_empty(x$endpoint$sas))
cat("Account shared access signature: <hidden>\n")
else cat("Account shared access signature: <none supplied>\n")
cat(sprintf("Storage API version: %s\n", x$endpoint$api_version))
invisible(x)
}
#' @rdname blob_container
#' @export
list_blob_containers <- function(endpoint, ...)
{
UseMethod("list_blob_containers")
}
#' @rdname blob_container
#' @export
list_blob_containers.character <- function(endpoint, key=NULL, token=NULL, sas=NULL,
api_version=getOption("azure_storage_api_version"),
...)
{
do.call(list_blob_containers, generate_endpoint_container(endpoint, key, token, sas, api_version))
}
#' @rdname blob_container
#' @export
list_blob_containers.blob_endpoint <- function(endpoint, ...)
{
res <- call_storage_endpoint(endpoint, "/", options=list(comp="list"))
lst <- lapply(res$Containers, function(cont) blob_container(endpoint, cont$Name[[1]]))
while(length(res$NextMarker) > 0)
{
res <- call_storage_endpoint(endpoint, "/", options=list(comp="list", marker=res$NextMarker[[1]]))
lst <- c(lst, lapply(res$Containers, function(cont) blob_container(endpoint, cont$Name[[1]])))
}
named_list(lst)
}
#' @rdname blob_container
#' @export
create_blob_container <- function(endpoint, ...)
{
UseMethod("create_blob_container")
}
#' @rdname blob_container
#' @export
create_blob_container.character <- function(endpoint, key=NULL, token=NULL, sas=NULL,
api_version=getOption("azure_storage_api_version"),
...)
{
endp <- generate_endpoint_container(endpoint, key, token, sas, api_version)
create_blob_container(endp$endpoint, endp$name, ...)
}
#' @rdname blob_container
#' @export
create_blob_container.blob_container <- function(endpoint, ...)
{
create_blob_container(endpoint$endpoint, endpoint$name, ...)
}
#' @rdname blob_container
#' @export
create_blob_container.blob_endpoint <- function(endpoint, name, public_access=c("none", "blob", "container"), ...)
{
public_access <- match.arg(public_access)
headers <- if(public_access != "none")
modifyList(list(...), list("x-ms-blob-public-access"=public_access))
else list(...)
obj <- blob_container(endpoint, name)
do_container_op(obj, options=list(restype="container"), headers=headers, http_verb="PUT")
obj
}
#' @rdname blob_container
#' @export
delete_blob_container <- function(endpoint, ...)
{
UseMethod("delete_blob_container")
}
#' @rdname blob_container
#' @export
delete_blob_container.character <- function(endpoint, key=NULL, token=NULL, sas=NULL,
api_version=getOption("azure_storage_api_version"),
...)
{
endp <- generate_endpoint_container(endpoint, key, token, sas, api_version)
delete_blob_container(endp$endpoint, endp$name, ...)
}
#' @rdname blob_container
#' @export
delete_blob_container.blob_container <- function(endpoint, ...)
{
delete_blob_container(endpoint$endpoint, endpoint$name, ...)
}
#' @rdname blob_container
#' @export
delete_blob_container.blob_endpoint <- function(endpoint, name, confirm=TRUE, lease=NULL, ...)
{
if(!delete_confirmed(confirm, paste0(endpoint$url, name), "container"))
return(invisible(NULL))
headers <- if(!is_empty(lease))
list("x-ms-lease-id"=lease)
else list()
obj <- blob_container(endpoint, name)
invisible(do_container_op(obj, options=list(restype="container"), headers=headers, http_verb="DELETE"))
}
#' Operations on a blob container or blob
#'
#' Upload, download, or delete a blob; list blobs in a container; create or delete directories; check blob availability.
#'
#' @param container A blob container object.
#' @param blob A string naming a blob.
#' @param dir For `list_blobs`, a string naming the directory. Note that blob storage does not support real directories; this argument simply filters the result to return only blobs whose names start with the given value.
#' @param src,dest The source and destination files for uploading and downloading. See 'Details' below.
#' @param info For `list_blobs`, level of detail about each blob to return: a vector of names only; the name, size, blob type, and whether this blob represents a directory; or all information.
#' @param confirm Whether to ask for confirmation on deleting a blob.
#' @param blocksize The number of bytes to upload/download per HTTP(S) request.
#' @param lease The lease for a blob, if present.
#' @param type When uploading, the type of blob to create. Currently only block and append blobs are supported.
#' @param append When uploading, whether to append the uploaded data to the destination blob. Only has an effect if `type="AppendBlob"`. If this is FALSE (the default) and the destination append blob exists, it is overwritten. If this is TRUE and the destination does not exist or is not an append blob, an error is thrown.
#' @param overwrite When downloading, whether to overwrite an existing destination file.
#' @param use_azcopy Whether to use the AzCopy utility from Microsoft to do the transfer, rather than doing it in R.
#' @param max_concurrent_transfers For `multiupload_blob` and `multidownload_blob`, the maximum number of concurrent file transfers. Each concurrent file transfer requires a separate R process, so limit this if you are low on memory.
#' @param prefix For `list_blobs`, an alternative way to specify the directory.
#' @param recursive For the multiupload/download functions, whether to recursively transfer files in subdirectories. For `list_blobs`, whether to include the contents of any subdirectories in the listing. For `delete_blob_dir`, whether to recursively delete subdirectory contents as well.
#' @param put_md5 For uploading, whether to compute the MD5 hash of the blob(s). This will be stored as part of the blob's properties. Only used for block blobs.
#' @param check_md5 For downloading, whether to verify the MD5 hash of the downloaded blob(s). This requires that the blob's `Content-MD5` property is set. If this is TRUE and the `Content-MD5` property is missing, a warning is generated.
#' @param snapshot,version For `download_blob`, optional snapshot and version identifiers. These should be datetime strings, in the format "yyyy-mm-ddTHH:MM:SS.SSSSSSSZ". If omitted, download the base blob.
#'
#' @details
#' `upload_blob` and `download_blob` are the workhorse file transfer functions for blobs. They each take as inputs a _single_ filename as the source for uploading/downloading, and a single filename as the destination. Alternatively, for uploading, `src` can be a [textConnection] or [rawConnection] object; and for downloading, `dest` can be NULL or a `rawConnection` object. If `dest` is NULL, the downloaded data is returned as a raw vector, and if a raw connection, it will be placed into the connection. See the examples below.
#'
#' `multiupload_blob` and `multidownload_blob` are functions for uploading and downloading _multiple_ files at once. They parallelise file transfers by using the background process pool provided by AzureRMR, which can lead to significant efficiency gains when transferring many small files. There are two ways to specify the source and destination for these functions:
#' - Both `src` and `dest` can be vectors naming the individual source and destination pathnames.
#' - The `src` argument can be a wildcard pattern expanding to one or more files, with `dest` naming a destination directory. In this case, if `recursive` is true, the file transfer will replicate the source directory structure at the destination.
#'
#' `upload_blob` and `download_blob` can display a progress bar to track the file transfer. You can control whether to display this with `options(azure_storage_progress_bar=TRUE|FALSE)`; the default is TRUE.
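#'
#' For example, to turn off the progress bar for the current session:
#'
#' ```
#' options(azure_storage_progress_bar=FALSE)
#' ```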
#'
#' `multiupload_blob` can upload files either as all block blobs or all append blobs, but not a mix of both.
#'
#' `blob_exists` and `blob_dir_exists` test for the existence of a blob and directory, respectively.
#'
#' `delete_blob` deletes a blob, and `delete_blob_dir` deletes all blobs in a directory (possibly recursively). This will also delete any snapshots for the blob(s) involved.
#'
#' ## AzCopy
#' `upload_blob` and `download_blob` have the ability to use the AzCopy commandline utility to transfer files, instead of native R code. This can be useful if you want to take advantage of AzCopy's logging and recovery features; it may also be faster in the case of transferring a very large number of small files. To enable this, set the `use_azcopy` argument to TRUE.
#'
#' The following points should be noted about AzCopy:
#' - It only supports SAS and AAD (OAuth) token as authentication methods. AzCopy also expects a single filename or wildcard spec as its source/destination argument, not a vector of filenames or a connection.
#' - Currently, it does _not_ support appending data to existing blobs.
#'
#' ## Directories
#' Blob storage does not have true directories, instead using filenames containing a separator character (typically '/') to mimic a directory structure. This has some consequences:
#'
#' - The `isdir` column in the data frame output of `list_blobs` is a best guess as to whether an object represents a file or directory, and may not always be correct. Currently, `list_blobs` assumes that any object with a file size of zero is a directory.
#' - Zero-length files can cause problems for the blob storage service as a whole (not just AzureStor). Try to avoid uploading such files.
#' - `create_blob_dir` and `delete_blob_dir` are guaranteed to function as expected only for accounts with hierarchical namespaces enabled. When this feature is disabled, directories do not exist as objects in their own right: to create a directory, simply upload a blob to that directory. To delete a directory, delete all the blobs within it; as far as the blob storage service is concerned, the directory then no longer exists.
#' - Similarly, the output of `list_blobs(recursive=TRUE)` can vary based on whether the storage account has hierarchical namespaces enabled.
#' - `blob_exists` will return FALSE for a directory when the storage account does not have hierarchical namespaces enabled.
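#'
#' A minimal sketch of working with directories (assuming `cont` is a blob container; the directory and file names are placeholders):
#'
#' ```
#' create_blob_dir(cont, "newdir")
#' upload_blob(cont, "data.csv", "newdir/data.csv")
#' list_blobs(cont, "newdir")
#' delete_blob_dir(cont, "newdir", recursive=TRUE, confirm=FALSE)
#' ```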
#'
#' @return
#' For `list_blobs`, details on the blobs in the container. For `download_blob`, if `dest=NULL`, the contents of the downloaded blob as a raw vector. For `blob_exists`, a flag indicating whether the blob exists.
#'
#' @seealso
#' [blob_container], [az_storage], [storage_download], [call_azcopy], [list_blob_snapshots], [list_blob_versions]
#'
#' [AzCopy version 10 on GitHub](https://github.com/Azure/azure-storage-azcopy)
#' [Guide to the different blob types](https://docs.microsoft.com/en-us/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs)
#'
#' @examples
#' \dontrun{
#'
#' cont <- blob_container("https://mystorage.blob.core.windows.net/mycontainer", key="access_key")
#'
#' list_blobs(cont)
#'
#' upload_blob(cont, "~/bigfile.zip", dest="bigfile.zip")
#' download_blob(cont, "bigfile.zip", dest="~/bigfile_downloaded.zip")
#'
#' delete_blob(cont, "bigfile.zip")
#'
#' # uploading/downloading multiple files at once
#' multiupload_blob(cont, "/data/logfiles/*.zip", "/uploaded_data")
#' multiupload_blob(cont, "myproj/*") # no dest directory uploads to root
#' multidownload_blob(cont, "jan*.*", "/data/january")
#'
#' # append blob: concatenating multiple files into one
#' upload_blob(cont, "logfile1", "logfile", type="AppendBlob", append=FALSE)
#' upload_blob(cont, "logfile2", "logfile", type="AppendBlob", append=TRUE)
#' upload_blob(cont, "logfile3", "logfile", type="AppendBlob", append=TRUE)
#'
#' # you can also pass a vector of file/pathnames as the source and destination
#' src <- c("file1.csv", "file2.csv", "file3.csv")
#' dest <- paste0("uploaded_", src)
#' multiupload_blob(cont, src, dest)
#'
#' # uploading serialized R objects via connections
#' json <- jsonlite::toJSON(iris, pretty=TRUE, auto_unbox=TRUE)
#' con <- textConnection(json)
#' upload_blob(cont, con, "iris.json")
#'
#' rds <- serialize(iris, NULL)
#' con <- rawConnection(rds)
#' upload_blob(cont, con, "iris.rds")
#'
#' # downloading files into memory: as a raw vector, and via a connection
#' rawvec <- download_blob(cont, "iris.json", NULL)
#' rawToChar(rawvec)
#'
#' con <- rawConnection(raw(0), "r+")
#' download_blob(cont, "iris.rds", con)
#' unserialize(con)
#'
#' # copy from a public URL: Iris data from UCI machine learning repository
#' copy_url_to_blob(cont,
#' "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data",
#' "iris.csv")
#'
#' }
#' @rdname blob
#' @export
list_blobs <- function(container, dir="/", info=c("partial", "name", "all"),
prefix=NULL, recursive=TRUE)
{
info <- match.arg(info)
opts <- list(comp="list", restype="container")
# ensure last char is always '/', to get list of blobs in a subdir
if(dir != "/")
{
if(!grepl("/$", dir))
dir <- paste0(dir, "/")
prefix <- dir
}
if(!is_empty(prefix))
opts <- c(opts, prefix=as.character(prefix))
if(!recursive)
opts <- c(opts, delimiter="/")
res <- do_container_op(container, options=opts)
lst <- res$Blobs
while(length(res$NextMarker) > 0)
{
opts$marker <- res$NextMarker[[1]]
res <- do_container_op(container, options=opts)
lst <- c(lst, res$Blobs)
}
if(info != "name")
{
prefixes <- lst[names(lst) == "BlobPrefix"]
blobs <- lst[names(lst) == "Blob"]
prefix_rows <- lapply(prefixes, function(prefix)
{
structure(list(Type="BlobPrefix", Name=unlist(prefix$Name), `Content-Length`=NA, BlobType=NA),
class="data.frame", row.names=c(NA_integer_, -1L))
})
blob_rows <- lapply(blobs, function(blob)
{
structure(c(Type="Blob", Name=blob$Name, unlist(blob$Properties)),
class="data.frame", row.names=c(NA_integer_, -1L))
})
df_prefixes <- do.call(vctrs::vec_rbind, prefix_rows)
df_blobs <- do.call(vctrs::vec_rbind, blob_rows)
no_prefixes <- nrow(df_prefixes) == 0
no_blobs <- nrow(df_blobs) == 0
if(no_prefixes && no_blobs)
return(data.frame())
else if(no_prefixes)
df <- df_blobs
else if(no_blobs)
df <- df_prefixes
else df <- vctrs::vec_rbind(df_prefixes, df_blobs)
if(length(df) > 0)
{
# reorder and rename first 2 columns for consistency with ADLS, file
ndf <- names(df)
namecol <- which(ndf == "Name")
sizecol <- which(ndf == "Content-Length")
typecol <- which(names(df) == "BlobType")
names(df)[c(namecol, sizecol, typecol)] <- c("name", "size", "blobtype")
df$size <- if(!is.null(df$size)) as.numeric(df$size) else NA
df$size[df$size == 0] <- NA
df$isdir <- is.na(df$size)
dircol <- which(names(df) == "isdir")
if(info == "all")
{
if(!is.null(df$`Last-Modified`))
df$`Last-Modified` <- as_datetime(df$`Last-Modified`)
if(!is.null(df$`Creation-Time`))
df$`Creation-Time` <- as_datetime(df$`Creation-Time`)
vctrs::vec_cbind(df[c(namecol, sizecol, dircol, typecol)], df[-c(namecol, sizecol, dircol, typecol)])
}
else df[c(namecol, sizecol, dircol, typecol)]
}
else data.frame()
}
else unname(vapply(lst, function(b) b$Name[[1]], FUN.VALUE=character(1)))
}
#' @rdname blob
#' @export
upload_blob <- function(container, src, dest=basename(src), type=c("BlockBlob", "AppendBlob"),
blocksize=if(type == "BlockBlob") 2^24 else 2^22,
lease=NULL, put_md5=FALSE, append=FALSE, use_azcopy=FALSE)
{
type <- match.arg(type)
if(use_azcopy)
azcopy_upload(container, src, dest, type=type, blocksize=blocksize, lease=lease, put_md5=put_md5)
else upload_blob_internal(container, src, dest, type=type, blocksize=blocksize, lease=lease,
put_md5=put_md5, append=append)
}
#' @rdname blob
#' @export
multiupload_blob <- function(container, src, dest, recursive=FALSE, type=c("BlockBlob", "AppendBlob"),
blocksize=if(type == "BlockBlob") 2^24 else 2^22,
lease=NULL, put_md5=FALSE, append=FALSE, use_azcopy=FALSE,
max_concurrent_transfers=10)
{
type <- match.arg(type)
if(use_azcopy)
return(azcopy_upload(container, src, dest, type=type, blocksize=blocksize, lease=lease, put_md5=put_md5,
recursive=recursive))
multiupload_internal(container, src, dest, recursive=recursive, type=type, blocksize=blocksize, lease=lease,
put_md5=put_md5, append=append, max_concurrent_transfers=max_concurrent_transfers)
}
#' @rdname blob
#' @export
download_blob <- function(container, src, dest=basename(src), blocksize=2^24, overwrite=FALSE, lease=NULL,
check_md5=FALSE, use_azcopy=FALSE, snapshot=NULL, version=NULL)
{
if(use_azcopy)
azcopy_download(container, src, dest, overwrite=overwrite, lease=lease, check_md5=check_md5)
else download_blob_internal(container, src, dest, blocksize=blocksize, overwrite=overwrite, lease=lease,
check_md5=check_md5, snapshot=snapshot, version=version)
}
#' @rdname blob
#' @export
multidownload_blob <- function(container, src, dest, recursive=FALSE, blocksize=2^24, overwrite=FALSE, lease=NULL,
check_md5=FALSE, use_azcopy=FALSE,
max_concurrent_transfers=10)
{
if(use_azcopy)
return(azcopy_download(container, src, dest, overwrite=overwrite, lease=lease, recursive=recursive,
check_md5=check_md5))
multidownload_internal(container, src, dest, recursive=recursive, blocksize=blocksize, overwrite=overwrite,
lease=lease, check_md5=check_md5, max_concurrent_transfers=max_concurrent_transfers)
}
#' @rdname blob
#' @export
delete_blob <- function(container, blob, confirm=TRUE)
{
if(!delete_confirmed(confirm, paste0(container$endpoint$url, container$name, "/", blob), "blob"))
return(invisible(NULL))
# deleting zero-length blobs (directories) will fail if the x-ms-delete-snapshots header is present
# and this is a HNS-enabled account:
# since there is no way to detect whether the account is HNS, and getting the blob size requires
# an extra API call, we try deleting with and without the header present
hdrs <- list(`x-ms-delete-snapshots`="include")
res <- try(do_container_op(container, blob, headers=hdrs, http_verb="DELETE"), silent=TRUE)
if(inherits(res, "try-error"))
res <- do_container_op(container, blob, headers=NULL, http_verb="DELETE")
invisible(res)
}
#' @rdname blob
#' @export
create_blob_dir <- function(container, dir)
{
# workaround: upload a zero-length file to the desired dir, then delete the file
destfile <- file.path(dir, basename(tempfile()))
opts <- options(azure_storage_progress_bar=FALSE)
on.exit(options(opts))
upload_blob(container, rawConnection(raw(0)), destfile)
delete_blob(container, destfile, confirm=FALSE)
invisible(NULL)
}
#' @rdname blob
#' @export
delete_blob_dir <- function(container, dir, recursive=FALSE, confirm=TRUE)
{
if(dir %in% c("/", ".") && !recursive)
return(invisible(NULL))
if(!delete_confirmed(confirm, paste0(container$endpoint$url, container$name, "/", dir), "directory"))
return(invisible(NULL))
if(recursive)
{
# delete everything under this directory
conts <- list_blobs(container, dir, recursive=TRUE, info="name")
for(n in rev(conts))
delete_blob(container, n, confirm=FALSE)
}
if(dir != "/" && blob_exists(container, dir))
delete_blob(container, dir, confirm=FALSE)
}
#' @rdname blob
#' @export
blob_exists <- function(container, blob)
{
res <- do_container_op(container, blob, headers = list(), http_verb = "HEAD", http_status_handler = "pass")
if(httr::status_code(res) == 404L)
return(FALSE)
httr::stop_for_status(res, storage_error_message(res))
return(TRUE)
}
#' @rdname blob
#' @export
blob_dir_exists <- function(container, dir)
{
if(dir == "/")
return(TRUE)
# multiple steps required to handle HNS-enabled and disabled accounts:
# 1. get blob properties
# - if no error, return (size == 0)
# - error can be because dir does not exist, OR HNS disabled
# 2. get dir listing
# - call API directly to avoid retrieving entire list
# - return (list is not empty)
props <- try(get_storage_properties(container, dir), silent=TRUE)
if(!inherits(props, "try-error"))
return(props[["content-length"]] == 0)
# ensure last char is always '/', to get list of blobs in a subdir
if(substr(dir, nchar(dir), nchar(dir)) != "/")
dir <- paste0(dir, "/")
opts <- list(comp="list", restype="container", maxresults=1, delimiter="/", prefix=dir)
res <- do_container_op(container, options=opts)
!is_empty(res$Blobs)
}
# ---- AzureStor/R/blob_client_funcs.R ----
#' @details
#' `copy_url_to_storage` transfers the contents of the file at the specified HTTP\[S\] URL directly to storage, without requiring a temporary local copy to be made. `multicopy_url_to_storage` does the same, for multiple URLs at once. Currently methods for these are only implemented for blob storage.
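#'
#' A minimal sketch (assuming `cont` is a blob container; the URLs and destination names are placeholders):
#'
#' ```
#' copy_url_to_storage(cont, "https://example.com/file1.csv", "file1.csv")
#' multicopy_url_to_storage(cont,
#'     c("https://example.com/file1.csv", "https://example.com/file2.csv"),
#'     c("file1.csv", "file2.csv"))
#' ```
#'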
#' @rdname file_transfer
#' @export
copy_url_to_storage <- function(container, src, dest, ...)
{
UseMethod("copy_url_to_storage")
}
#' @rdname file_transfer
#' @export
multicopy_url_to_storage <- function(container, src, dest, ...)
{
UseMethod("multicopy_url_to_storage")
}
#' @rdname file_transfer
#' @export
copy_url_to_storage.blob_container <- function(container, src, dest, ...)
{
copy_url_to_blob(container, src, dest, ...)
}
#' @rdname file_transfer
#' @export
multicopy_url_to_storage.blob_container <- function(container, src, dest, ...)
{
multicopy_url_to_blob(container, src, dest, ...)
}
#' @param async For `copy_url_to_blob` and `multicopy_url_to_blob`, whether the copy operation should be asynchronous (proceed in the background).
#' @param auth_header For `copy_url_to_blob` and `multicopy_url_to_blob`, an optional `Authorization` HTTP header to send to the source. This allows copying files that are not publicly available or otherwise have access restrictions.
#' @details
#' `copy_url_to_blob` transfers the contents of the file at the specified HTTP\[S\] URL directly to blob storage, without requiring a temporary local copy to be made. `multicopy_url_to_blob` does the same, for multiple URLs at once. These functions have a current file size limit of 256MB.
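#'
#' For example, a minimal sketch of copying from a source that requires authorization (the URL and token value are placeholders):
#'
#' ```
#' copy_url_to_blob(cont, "https://example.com/private/file1.csv", "file1.csv",
#'     auth_header=paste("Bearer", "mytoken"))
#' ```
#'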
#' @rdname blob
#' @export
copy_url_to_blob <- function(container, src, dest, lease=NULL, async=FALSE, auth_header=NULL)
{
if(!is_url(src))
stop("Source must be a HTTP[S] url", call.=FALSE)
headers <- list(
`x-ms-copy-source`=src,
`x-ms-requires-sync`=!async
)
if(!is.null(auth_header))
headers[["x-ms-copy-source-authorization"]] <- auth_header
if(!is.null(lease))
headers[["x-ms-lease-id"]] <- as.character(lease)
do_container_op(container, dest, headers=headers, http_verb="PUT")
invisible(NULL)
}
#' @rdname blob
#' @export
multicopy_url_to_blob <- function(container, src, dest, lease=NULL, async=FALSE, max_concurrent_transfers=10,
auth_header=NULL)
{
if(missing(dest))
dest <- basename(src)
n_src <- length(src)
n_dest <- length(dest)
if(n_src == 0)
stop("No files to transfer", call.=FALSE)
if(n_dest != n_src)
stop("'dest' must contain one name per file in 'src'", call.=FALSE)
if(n_src == 1)
return(copy_url_to_blob(container, src, dest, lease=lease, async=async))
init_pool(max_concurrent_transfers)
pool_export("container", envir=environment())
pool_map(
function(s, d, lease, async, auth_header)
AzureStor::copy_url_to_blob(container, s, d, lease=lease, async=async, auth_header=auth_header),
src, dest,
MoreArgs=list(lease=lease, async=async, auth_header=auth_header)
)
invisible(NULL)
}
# ---- AzureStor/R/blob_copyurl.R ----
#' Operations on blob leases
#'
#' Manage leases for blobs and blob containers.
#'
#' @param container A blob container object.
#' @param blob The name of an individual blob. If not supplied, the lease applies to the entire container.
#' @param duration For `acquire_lease`, the duration of the requested lease. For an indefinite duration, set this to -1.
#' @param lease For `acquire_lease` an optional proposed name of the lease; for `release_lease`, `renew_lease` and `change_lease`, the name of the existing lease.
#' @param period For `break_lease`, the period for which to break the lease.
#' @param new_lease For `change_lease`, the proposed name of the lease.
#'
#' @details
#' Leasing is a way to prevent a blob or container from being accidentally deleted. The duration of a lease can range from 15 to 60 seconds, or be indefinite.
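#'
#' A minimal sketch of a typical leasing workflow (assuming `cont` is a blob container; the blob and file names are placeholders):
#'
#' ```
#' lease <- acquire_lease(cont, "myblob", duration=60)
#' upload_blob(cont, "newdata.csv", "myblob", lease=lease)
#' release_lease(cont, "myblob", lease=lease)
#' ```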
#'
#' @return
#' For `acquire_lease` and `change_lease`, a string containing the lease ID.
#'
#' @seealso
#' [blob_container],
#' [Leasing a blob](https://docs.microsoft.com/en-us/rest/api/storageservices/lease-blob),
#' [Leasing a container](https://docs.microsoft.com/en-us/rest/api/storageservices/lease-container)
#'
#' @rdname lease
#' @export
acquire_lease <- function(container, blob="", duration=60, lease=NULL)
{
headers <- list("x-ms-lease-action"="acquire", "x-ms-lease-duration"=duration)
if(!is_empty(lease))
headers <- c(headers, list("x-ms-proposed-lease-id"=lease))
options <- list(comp="lease")
if(blob == "")
options$restype <- "container"
res <- do_container_op(container, blob, options=options, headers=headers,
http_verb="PUT", return_headers=TRUE)
res[["x-ms-lease-id"]]
}
#' @rdname lease
#' @export
break_lease <- function(container, blob="", period=NULL)
{
headers <- list("x-ms-lease-action"="break")
if(!is_empty(period))
headers <- c(headers, list("x-ms-lease-break-period"=period))
options <- list(comp="lease")
if(blob == "")
options$restype <- "container"
invisible(do_container_op(container, blob, options=options, headers=headers,
http_verb="PUT"))
}
#' @rdname lease
#' @export
release_lease <- function(container, blob="", lease)
{
headers <- list("x-ms-lease-id"=lease, "x-ms-lease-action"="release")
options <- list(comp="lease")
if(blob == "")
options$restype <- "container"
invisible(do_container_op(container, blob, options=options, headers=headers,
http_verb="PUT"))
}
#' @rdname lease
#' @export
renew_lease <- function(container, blob="", lease)
{
headers <- list("x-ms-lease-id"=lease, "x-ms-lease-action"="renew")
options <- list(comp="lease")
if(blob == "")
options$restype <- "container"
invisible(do_container_op(container, blob, options=options, headers=headers,
http_verb="PUT"))
}
#' @rdname lease
#' @export
change_lease <- function(container, blob="", lease, new_lease)
{
headers <- list("x-ms-lease-id"=lease, "x-ms-lease-action"="change", "x-ms-proposed-lease-id"=new_lease)
options <- list(comp="lease")
if(blob == "")
options$restype <- "container"
res <- do_container_op(container, blob, options=options, headers=headers,
http_verb="PUT", return_headers=TRUE)
res[["x-ms-lease-id"]]
}
# ---- AzureStor/R/blob_lease.R ----
#' Create, list and delete blob snapshots
#'
#' @param container A blob container.
#' @param blob The path/name of a blob.
#' @param ... For `create_blob_snapshot`, an optional list of name-value pairs that will be treated as the metadata for the snapshot. If no metadata is supplied, the metadata for the base blob is copied to the snapshot.
#' @param snapshot For `delete_blob_snapshot`, the specific snapshot to delete. This should be a datetime string, in the format `yyyy-mm-ddTHH:MM:SS.SSSSSSSZ`. To delete _all_ snapshots for the blob, set this to `"all"`.
#' @param confirm Whether to ask for confirmation on deleting a blob's snapshots.
#' @details
#' Blobs can have _snapshots_ associated with them, which are the contents and optional metadata for the blob at a given point in time. A snapshot is identified by the date and time on which it was created.
#'
#' `create_blob_snapshot` creates a new snapshot, `list_blob_snapshots` lists all the snapshots, and `delete_blob_snapshot` deletes a given snapshot or all snapshots for a blob.
#'
#' Note that snapshots are only supported if the storage account does NOT have hierarchical namespaces enabled.
#' @return
#' For `create_blob_snapshot`, the datetime string that identifies the snapshot.
#'
#' For `list_blob_snapshots` a vector of such strings, or NULL if the blob has no snapshots.
#' @seealso
#' Other AzureStor functions that support blob snapshots by passing a `snapshot` argument: [download_blob], [get_storage_properties], [get_storage_metadata]
#' @examples
#' \dontrun{
#'
#' cont <- blob_container("https://mystorage.blob.core.windows.net/mycontainer", key="access_key")
#'
#' snap_id <- create_blob_snapshot(cont, "myfile", tag1="value1", tag2="value2")
#'
#' list_blob_snapshots(cont, "myfile")
#'
#' get_storage_properties(cont, "myfile", snapshot=snap_id)
#'
#' # returns list(tag1="value1", tag2="value2")
#' get_storage_metadata(cont, "myfile", snapshot=snap_id)
#'
#' download_blob(cont, "myfile", snapshot=snap_id)
#'
#' # delete all snapshots
#' delete_blob_snapshot(cont, "myfile", snapshot="all")
#'
#' }
#' @rdname snapshot
#' @export
create_blob_snapshot <- function(container, blob, ...)
{
opts <- list(comp="snapshot")
meta <- list(...)
hdrs <- if(!is_empty(meta))
set_classic_metadata_headers(meta)
else list()
res <- do_container_op(container, blob, options=opts, headers=hdrs, http_verb="PUT", return_headers=TRUE)
res$`x-ms-snapshot`
}
#' @rdname snapshot
#' @export
list_blob_snapshots <- function(container, blob)
{
opts <- list(comp="list", restype="container", include="snapshots", prefix=as.character(blob))
res <- do_container_op(container, options=opts)
lst <- res$Blobs
while(length(res$NextMarker) > 0)
{
opts$marker <- res$NextMarker[[1]]
res <- do_container_op(container, options=opts)
lst <- c(lst, res$Blobs)
}
unname(unlist(lapply(lst, function(bl) bl$Snapshot[[1]])))
}
#' @rdname snapshot
#' @export
delete_blob_snapshot <- function(container, blob, snapshot, confirm=TRUE)
{
if(!delete_confirmed(confirm, snapshot, "blob snapshot"))
return(invisible(NULL))
hdrs <- opts <- list()
if(snapshot == "all")
hdrs <- list(`x-ms-delete-snapshots`="only")
else opts <- list(snapshot=snapshot)
invisible(do_container_op(container, blob, options=opts, headers=hdrs, http_verb="DELETE"))
}
# ---- AzureStor/R/blob_snapshot.R ----
upload_blob_internal <- function(container, src, dest, type, blocksize, lease=NULL, put_md5=FALSE, append=FALSE)
{
src <- normalize_src(src, put_md5)
on.exit(close(src$con))
switch(type,
"BlockBlob"=upload_block_blob(container, src, dest, blocksize, lease),
"AppendBlob"=upload_append_blob(container, src, dest, blocksize, lease, append),
stop("Unknown blob type: ", type, call.=FALSE)
)
invisible(NULL)
}
download_blob_internal <- function(container, src, dest, blocksize=2^24, overwrite=FALSE, lease=NULL,
check_md5=FALSE, snapshot=NULL, version=NULL)
{
headers <- list()
if(!is.null(lease))
headers[["x-ms-lease-id"]] <- as.character(lease)
opts <- list()
if(!is.null(snapshot))
opts$snapshot <- snapshot
if(!is.null(version))
opts$versionid <- version
dest <- init_download_dest(dest, overwrite)
on.exit(dispose_download_dest(dest))
# get file size (for progress bar) and MD5 hash
props <- get_storage_properties(container, src, snapshot=snapshot, version=version)
size <- as.numeric(props[["content-length"]])
src_md5 <- props[["content-md5"]]
bar <- storage_progress_bar$new(size, "down")
offset <- 0
while(offset < size)
{
headers$Range <- sprintf("bytes=%.0f-%.0f", offset, offset + blocksize - 1)
res <- do_container_op(container, src, options=opts, headers=headers, progress=bar$update(),
http_status_handler="pass")
httr::stop_for_status(res, storage_error_message(res))
writeBin(httr::content(res, as="raw"), dest)
offset <- offset + blocksize
bar$offset <- offset
}
bar$close()
if(check_md5)
do_md5_check(dest, src_md5)
if(inherits(dest, "null_dest")) rawConnectionValue(dest) else invisible(NULL)
}
# ---- AzureStor/R/blob_transfer_internal.R ----
upload_block_blob <- function(container, src, dest, blocksize, lease)
{
bar <- storage_progress_bar$new(src$size, "up")
headers <- list("x-ms-blob-type"="BlockBlob")
if(!is.null(lease))
headers[["x-ms-lease-id"]] <- as.character(lease)
# upload each block
blocklist <- list()
base_id <- openssl::md5(dest)
i <- 1
repeat
{
body <- readBin(src$con, "raw", blocksize)
thisblock <- length(body)
if(thisblock == 0)
break
# ensure content-length is never exponential notation
headers[["content-length"]] <- sprintf("%.0f", thisblock)
headers[["content-md5"]] <- encode_md5(body)
id <- openssl::base64_encode(sprintf("%s-%010d", base_id, i))
opts <- list(comp="block", blockid=id)
do_container_op(container, dest, headers=headers, body=body, options=opts, progress=bar$update(),
http_verb="PUT")
blocklist <- c(blocklist, list(Latest=list(id)))
bar$offset <- bar$offset + blocksize
i <- i + 1
}
bar$close()
# update block list
body <- as.character(xml2::as_xml_document(list(BlockList=blocklist)))
headers <- list(
"content-length"=sprintf("%.0f", nchar(body)),
"x-ms-blob-content-type"=src$content_type,
"content-md5"=encode_md5(charToRaw(body))
)
if(!is.null(src$md5))
headers[["x-ms-blob-content-md5"]] <- src$md5
if(!is.null(lease))
headers[["x-ms-lease-id"]] <- as.character(lease)
do_container_op(container, dest, headers=headers, body=body, options=list(comp="blocklist"),
http_verb="PUT")
}
upload_append_blob <- function(container, src, dest, blocksize, lease, append)
{
bar <- storage_progress_bar$new(src$size, "up")
headers <- list("x-ms-blob-type"="AppendBlob")
if(!is.null(lease))
headers[["x-ms-lease-id"]] <- as.character(lease)
# create the blob if necessary
if(!append)
do_container_op(container, dest, headers=headers, http_verb="PUT")
# upload each block
blocklist <- list()
base_id <- openssl::md5(dest)
i <- 1
repeat
{
body <- readBin(src$con, "raw", blocksize)
thisblock <- length(body)
if(thisblock == 0)
break
# ensure content-length is never exponential notation
headers[["content-length"]] <- sprintf("%.0f", thisblock)
headers[["content-md5"]] <- encode_md5(body)
id <- openssl::base64_encode(sprintf("%s-%010d", base_id, i))
opts <- list(comp="appendblock")
do_container_op(container, dest, headers=headers, body=body, options=opts, progress=bar$update(),
http_verb="PUT")
bar$offset <- bar$offset + blocksize
i <- i + 1
}
bar$close()
}
# ---- AzureStor/R/blob_upload_internal.R ----
#' List and delete blob versions
#'
#' @param container A blob container.
#' @param blob The path/name of a blob.
#' @param version For `delete_blob_version`, the specific version to delete. This should be a datetime string, in the format `yyyy-mm-ddTHH:MM:SS.SSSSSSSZ`.
#' @param confirm Whether to ask for confirmation on deleting a blob version.
#' @details
#' A version captures the state of a blob at a given point in time. Each version is identified with a version ID. When blob versioning is enabled for a storage account, Azure Storage automatically creates a new version with a unique ID when a blob is first created and each time that the blob is subsequently modified.
#'
#' A version ID can identify the current version or a previous version. A blob can have only one current version at a time.
#'
#' When you create a new blob, a single version exists, and that version is the current version. When you modify an existing blob, the current version becomes a previous version. A new version is created to capture the updated state, and that new version is the current version. When you delete a blob, the current version of the blob becomes a previous version, and there is no longer a current version. Any previous versions of the blob persist.
#'
#' Versions are different to [snapshots][list_blob_snapshots]:
#' - A new snapshot has to be explicitly created via `create_blob_snapshot`. A new blob version is automatically created whenever the base blob is modified (and hence there is no `create_blob_version` function).
#' - Deleting the base blob will also delete all snapshots for that blob, while blob versions will be retained (but will typically be inaccessible).
#' - Snapshots are only available for storage accounts with hierarchical namespaces disabled, while versioning can be used with any storage account.
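#'
#' A minimal sketch (assuming `cont` is a blob container on an account with versioning enabled; the blob name is a placeholder):
#'
#' ```
#' vers <- list_blob_versions(cont, "myfile")
#'
#' # download and delete a specific version
#' download_blob(cont, "myfile", "myfile_version.csv", version=vers[1])
#' delete_blob_version(cont, "myfile", version=vers[1])
#' ```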
#'
#' @return
#' For `list_blob_versions`, a vector of datetime strings which are the IDs of each version.
#' @rdname version
#' @export
list_blob_versions <- function(container, blob)
{
opts <- list(comp="list", restype="container", include="versions", prefix=as.character(blob))
res <- do_container_op(container, options=opts)
lst <- res$Blobs
while(length(res$NextMarker) > 0)
{
opts$marker <- res$NextMarker[[1]]
res <- do_container_op(container, options=opts)
lst <- c(lst, res$Blobs)
}
unname(unlist(lapply(lst, function(bl) bl$VersionId[[1]])))
}
#' @rdname version
#' @export
delete_blob_version <- function(container, blob, version, confirm=TRUE)
{
if(!delete_confirmed(confirm, version, "blob version"))
return(invisible(NULL))
opts <- list(versionid=version)
invisible(do_container_op(container, blob, options=opts, http_verb="DELETE"))
}
| /scratch/gouwar.j/cran-all/cranData/AzureStor/R/blob_version.R |
#' Create a storage endpoint object
#'
#' Create a storage endpoint object, for interacting with blob, file, table, queue or ADLSgen2 storage.
#'
#' @param endpoint The URL (hostname) for the endpoint. This must be of the form `http[s]://{account-name}.{type}.{core-host-name}`, where `type` is one of `"dfs"` (corresponding to ADLSgen2), `"blob"`, `"file"`, `"queue"` or `"table"`. On the public Azure cloud, endpoints will be of the form `https://{account-name}.{type}.core.windows.net`.
#' @param key The access key for the storage account.
#' @param token An Azure Active Directory (AAD) authentication token. This can be either a string, or an object of class AzureToken created by [AzureRMR::get_azure_token]. The latter is the recommended way of doing it, as it allows for automatic refreshing of expired tokens.
#' @param sas A shared access signature (SAS) for the account.
#' @param api_version The storage API version to use when interacting with the host. Defaults to `"2019-07-07"`.
#' @param service For `storage_endpoint`, the service endpoint type: either "blob", "file", "adls", "queue" or "table". If this is missing, it is inferred from the endpoint hostname.
#' @param x For the print method, a storage endpoint object.
#' @param ... For the print method, further arguments passed to lower-level functions.
#'
#' @details
#' This is the starting point for the client-side storage interface in AzureStor. `storage_endpoint` is a generic function to create an endpoint for any type of Azure storage, while `adls_endpoint`, `blob_endpoint` and `file_endpoint` create endpoints for those specific types.
#'
#' If multiple authentication objects are supplied, they are used in this order of priority: first an access key, then an AAD token, then a SAS. If no authentication objects are supplied, only public (anonymous) access to the endpoint is possible.
#'
#' @section Storage emulators:
#' AzureStor supports connecting to the [Azure SDK](https://docs.microsoft.com/en-us/azure/storage/common/storage-use-emulator) and [Azurite](https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azurite) emulators for blob and queue storage. To connect, pass the full URL of the endpoint, including the account name, to the `blob_endpoint` and `queue_endpoint` methods (the latter from the AzureQstor package). The warning about an unrecognised endpoint can be ignored. See the linked pages, and the examples below, for details on how to authenticate with the emulator.
#'
#' Note that the Azure SDK emulator is no longer being actively developed; it's recommended to use Azurite for development work.
#'
#' @return
#' `storage_endpoint` returns an object of S3 class `"adls_endpoint"`, `"blob_endpoint"`, `"file_endpoint"`, `"queue_endpoint"` or `"table_endpoint"` depending on the type of endpoint. All of these also inherit from class `"storage_endpoint"`. `adls_endpoint`, `blob_endpoint` and `file_endpoint` return an object of the respective class.
#'
#' Note that while endpoint classes exist for all storage types, currently AzureStor only includes methods for interacting with ADLSgen2, blob and file storage.
#'
#' @seealso
#' [create_storage_account], [adls_filesystem], [create_adls_filesystem], [file_share], [create_file_share], [blob_container], [create_blob_container]
#'
#' @examples
#' \dontrun{
#'
#' # obtaining an endpoint from the storage account resource object
#' stor <- AzureRMR::get_azure_login()$
#' get_subscription("sub_id")$
#' get_resource_group("rgname")$
#' get_storage_account("mystorage")
#' stor$get_blob_endpoint()
#'
#' # creating an endpoint standalone
#' blob_endpoint("https://mystorage.blob.core.windows.net/", key="access_key")
#'
#' # using an OAuth token for authentication -- note resource is 'storage.azure.com'
#' token <- AzureAuth::get_azure_token("https://storage.azure.com",
#' "myaadtenant", "app_id", "password")
#' adls_endpoint("https://myadlsstorage.dfs.core.windows.net/", token=token)
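#'
#' # authenticating with a SAS, eg one generated by get_account_sas ("sas_string" is a placeholder)
#' blob_endpoint("https://mystorage.blob.core.windows.net/", sas="sas_string")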
#'
#'
#' ## Azurite storage emulator:
#'
#' # connecting to Azurite with the default account and key (these also work for the Azure SDK)
#' azurite_account <- "devstoreaccount1"
#' azurite_key <-
#' "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
#' blob_endpoint(paste0("http://127.0.0.1:10000/", azurite_account), key=azurite_key)
#'
#' # to use a custom account name and key, set the AZURITE_ACCOUNTS env var before starting Azurite
#' Sys.setenv(AZURITE_ACCOUNTS="account1:key1")
#' blob_endpoint("http://127.0.0.1:10000/account1", key="key1")
#'
#' }
#' @aliases endpoint blob_endpoint file_endpoint queue_endpoint table_endpoint
#' @export
storage_endpoint <- function(endpoint, key=NULL, token=NULL, sas=NULL, api_version, service)
{
if(missing(service))
{
service <- sapply(c("blob", "file", "queue", "table", "adls"),
function(x) is_endpoint_url(endpoint, x))
if(!any(service))
stop("Unknown endpoint service", call.=FALSE)
service <- names(service)[service]
}
if(missing(api_version))
api_version <- getOption("azure_storage_api_version")
obj <- list(url=endpoint, key=key, token=token, sas=sas, api_version=api_version)
class(obj) <- c(paste0(service, "_endpoint"), "storage_endpoint")
obj
}
#' @rdname storage_endpoint
#' @export
blob_endpoint <- function(endpoint, key=NULL, token=NULL, sas=NULL,
api_version=getOption("azure_storage_api_version"))
{
obj <- list(url=endpoint, key=key, token=token, sas=sas, api_version=api_version)
class(obj) <- c("blob_endpoint", "storage_endpoint")
obj
}
#' @rdname storage_endpoint
#' @export
file_endpoint <- function(endpoint, key=NULL, token=NULL, sas=NULL,
api_version=getOption("azure_storage_api_version"))
{
obj <- list(url=endpoint, key=key, token=token, sas=sas, api_version=api_version)
class(obj) <- c("file_endpoint", "storage_endpoint")
obj
}
#' @rdname storage_endpoint
#' @export
adls_endpoint <- function(endpoint, key=NULL, token=NULL, sas=NULL,
api_version=getOption("azure_storage_api_version"))
{
obj <- list(url=endpoint, key=key, token=token, sas=sas, api_version=api_version)
class(obj) <- c("adls_endpoint", "storage_endpoint")
obj
}
#' @rdname storage_endpoint
#' @export
print.storage_endpoint <- function(x, ...)
{
type <- sub("_endpoint$", "", class(x)[1])
cat(sprintf("Azure %s storage endpoint\n", type))
cat(sprintf("URL: %s\n", x$url))
if(!is_empty(x$key))
cat("Access key: <hidden>\n")
else cat("Access key: <none supplied>\n")
if(!is_empty(x$token))
{
cat("Azure Active Directory token:\n")
print(x$token)
}
else cat("Azure Active Directory token: <none supplied>\n")
if(!is_empty(x$sas))
cat("Account shared access signature: <hidden>\n")
else cat("Account shared access signature: <none supplied>\n")
cat(sprintf("Storage API version: %s\n", x$api_version))
invisible(x)
}
#' @rdname storage_endpoint
#' @export
print.adls_endpoint <- function(x, ...)
{
cat("Azure Data Lake Storage Gen2 endpoint\n")
cat(sprintf("URL: %s\n", x$url))
if(!is_empty(x$key))
cat("Access key: <hidden>\n")
else cat("Access key: <none supplied>\n")
if(!is_empty(x$token))
{
cat("Azure Active Directory token:\n")
print(x$token)
}
else cat("Azure Active Directory token: <none supplied>\n")
if(!is_empty(x$sas))
cat("Account shared access signature: <hidden>\n")
else cat("Account shared access signature: <none supplied>\n")
cat(sprintf("Storage API version: %s\n", x$api_version))
invisible(x)
}
| /scratch/gouwar.j/cran-all/cranData/AzureStor/R/client_endpoint.R |
#' Storage client generics
#'
#' @param endpoint A storage endpoint object, or for the character methods, a string giving the full URL to the container.
#' @param container A storage container object.
#' @param key,token,sas For the character methods, authentication credentials for the container: either an access key, an Azure Active Directory (AAD) token, or a SAS. If multiple arguments are supplied, a key takes priority over a token, which takes priority over a SAS.
#' @param name For the storage container management methods, a container name.
#' @param file,dir For the storage object management methods, a file or directory name.
#' @param confirm For the deletion methods, whether to ask for confirmation first.
#' @param ... Further arguments to pass to lower-level functions.
#'
#' @details
#' These methods provide a framework for all storage management tasks supported by AzureStor. They dispatch to the appropriate functions for each type of storage.
#'
#' Storage container management methods:
#' - `storage_container` dispatches to `blob_container`, `file_share` or `adls_filesystem`
#' - `create_storage_container` dispatches to `create_blob_container`, `create_file_share` or `create_adls_filesystem`
#' - `delete_storage_container` dispatches to `delete_blob_container`, `delete_file_share` or `delete_adls_filesystem`
#' - `list_storage_containers` dispatches to `list_blob_containers`, `list_file_shares` or `list_adls_filesystems`
#'
#' Storage object management methods:
#' - `list_storage_files` dispatches to `list_blobs`, `list_azure_files` or `list_adls_files`
#' - `create_storage_dir` dispatches to `create_blob_dir`, `create_azure_dir` or `create_adls_dir`
#' - `delete_storage_dir` dispatches to `delete_blob_dir`, `delete_azure_dir` or `delete_adls_dir`
#' - `delete_storage_file` dispatches to `delete_blob`, `delete_azure_file` or `delete_adls_file`
#' - `storage_file_exists` dispatches to `blob_exists`, `azure_file_exists` or `adls_file_exists`
#' - `storage_dir_exists` dispatches to `blob_dir_exists`, `azure_dir_exists` or `adls_dir_exists`
#' - `create_storage_snapshot` dispatches to `create_blob_snapshot`
#' - `list_storage_snapshots` dispatches to `list_blob_snapshots`
#' - `delete_storage_snapshot` dispatches to `delete_blob_snapshot`
#' - `list_storage_versions` dispatches to `list_blob_versions`
#' - `delete_storage_version` dispatches to `delete_blob_version`
#'
#' @seealso
#' [storage_endpoint], [blob_container], [file_share], [adls_filesystem]
#'
#' [list_blobs], [list_azure_files], [list_adls_files]
#'
#' Similar generics exist for file transfer methods; see the page for [storage_download].
#'
#' @examples
#' \dontrun{
#'
#' # storage endpoints for the one account
#' bl <- storage_endpoint("https://mystorage.blob.core.windows.net/", key="access_key")
#' fl <- storage_endpoint("https://mystorage.file.core.windows.net/", key="access_key")
#'
#' list_storage_containers(bl)
#' list_storage_containers(fl)
#'
#' # creating containers
#' cont <- create_storage_container(bl, "newblobcontainer")
#' fs <- create_storage_container(fl, "newfileshare")
#'
#' # creating directories (if possible)
#' create_storage_dir(cont, "newdir") # will error out
#' create_storage_dir(fs, "newdir")
#'
#' # transfer a file
#' storage_upload(bl, "~/file.txt", "storage_file.txt")
#' storage_upload(cont, "~/file.txt", "newdir/storage_file.txt")
#'
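#' # a sketch of the existence-check and snapshot/version generics, reusing the objects above
#' # (snapshots and versions are blob-only, and versions require versioning to be enabled)
#' storage_file_exists(cont, "newdir/storage_file.txt")
#' storage_dir_exists(fs, "newdir")
#' create_storage_snapshot(cont, "newdir/storage_file.txt")
#' list_storage_snapshots(cont)
#' list_storage_versions(cont)
#'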
#' }
#' @aliases storage_generics
#' @rdname generics
#' @export
storage_container <- function(endpoint, ...)
UseMethod("storage_container")
#' @rdname generics
#' @export
storage_container.blob_endpoint <- function(endpoint, name, ...)
blob_container(endpoint, name, ...)
#' @rdname generics
#' @export
storage_container.file_endpoint <- function(endpoint, name, ...)
file_share(endpoint, name, ...)
#' @rdname generics
#' @export
storage_container.adls_endpoint <- function(endpoint, name, ...)
adls_filesystem(endpoint, name, ...)
#' @rdname generics
#' @export
storage_container.character <- function(endpoint, key=NULL, token=NULL, sas=NULL, ...)
{
lst <- parse_storage_url(endpoint)
endpoint <- storage_endpoint(lst[[1]], key=key, token=token, sas=sas, ...)
storage_container(endpoint, lst[[2]])
}
# create container
#' @rdname generics
#' @export
create_storage_container <- function(endpoint, ...)
UseMethod("create_storage_container")
#' @rdname generics
#' @export
create_storage_container.blob_endpoint <- function(endpoint, name, ...)
create_blob_container(endpoint, name, ...)
#' @rdname generics
#' @export
create_storage_container.file_endpoint <- function(endpoint, name, ...)
create_file_share(endpoint, name, ...)
#' @rdname generics
#' @export
create_storage_container.adls_endpoint <- function(endpoint, name, ...)
create_adls_filesystem(endpoint, name, ...)
#' @rdname generics
#' @export
create_storage_container.storage_container <- function(endpoint, ...)
create_storage_container(endpoint$endpoint, endpoint$name, ...)
#' @rdname generics
#' @export
create_storage_container.character <- function(endpoint, key=NULL, token=NULL, sas=NULL, ...)
{
lst <- parse_storage_url(endpoint)
endpoint <- storage_endpoint(lst[[1]], key=key, token=token, sas=sas, ...)
create_storage_container(endpoint, lst[[2]])
}
# delete container
#' @rdname generics
#' @export
delete_storage_container <- function(endpoint, ...)
UseMethod("delete_storage_container")
#' @rdname generics
#' @export
delete_storage_container.blob_endpoint <- function(endpoint, name, ...)
delete_blob_container(endpoint, name, ...)
#' @rdname generics
#' @export
delete_storage_container.file_endpoint <- function(endpoint, name, ...)
delete_file_share(endpoint, name, ...)
#' @rdname generics
#' @export
delete_storage_container.adls_endpoint <- function(endpoint, name, ...)
delete_adls_filesystem(endpoint, name, ...)
#' @rdname generics
#' @export
delete_storage_container.storage_container <- function(endpoint, ...)
delete_storage_container(endpoint$endpoint, endpoint$name, ...)
#' @rdname generics
#' @export
delete_storage_container.character <- function(endpoint, key=NULL, token=NULL, sas=NULL, confirm=TRUE, ...)
{
lst <- parse_storage_url(endpoint)
endpoint <- storage_endpoint(lst[[1]], key=key, token=token, sas=sas, ...)
delete_storage_container(endpoint, lst[[2]], confirm=confirm)
}
# list containers
#' @rdname generics
#' @export
list_storage_containers <- function(endpoint, ...)
UseMethod("list_storage_containers")
#' @rdname generics
#' @export
list_storage_containers.blob_endpoint <- function(endpoint, ...)
list_blob_containers(endpoint, ...)
#' @rdname generics
#' @export
list_storage_containers.file_endpoint <- function(endpoint, ...)
list_file_shares(endpoint, ...)
#' @rdname generics
#' @export
list_storage_containers.adls_endpoint <- function(endpoint, ...)
list_adls_filesystems(endpoint, ...)
#' @rdname generics
#' @export
list_storage_containers.character <- function(endpoint, key=NULL, token=NULL, sas=NULL, ...)
{
lst <- parse_storage_url(endpoint)
endpoint <- storage_endpoint(lst[[1]], key=key, token=token, sas=sas, ...)
list_storage_containers(endpoint, lst[[2]])
}
# list files
#' @rdname generics
#' @export
list_storage_files <- function(container, ...)
UseMethod("list_storage_files")
#' @rdname generics
#' @export
list_storage_files.blob_container <- function(container, ...)
list_blobs(container, ...)
#' @rdname generics
#' @export
list_storage_files.file_share <- function(container, ...)
list_azure_files(container, ...)
#' @rdname generics
#' @export
list_storage_files.adls_filesystem <- function(container, ...)
list_adls_files(container, ...)
# create directory
#' @rdname generics
#' @export
create_storage_dir <- function(container, ...)
UseMethod("create_storage_dir")
#' @rdname generics
#' @export
create_storage_dir.blob_container <- function(container, dir, ...)
create_blob_dir(container, dir, ...)
#' @rdname generics
#' @export
create_storage_dir.file_share <- function(container, dir, ...)
create_azure_dir(container, dir, ...)
#' @rdname generics
#' @export
create_storage_dir.adls_filesystem <- function(container, dir, ...)
create_adls_dir(container, dir, ...)
# delete directory
#' @rdname generics
#' @export
delete_storage_dir <- function(container, ...)
UseMethod("delete_storage_dir")
#' @rdname generics
#' @export
delete_storage_dir.blob_container <- function(container, dir, ...)
delete_blob_dir(container, dir, ...)
#' @rdname generics
#' @export
delete_storage_dir.file_share <- function(container, dir, ...)
delete_azure_dir(container, dir, ...)
#' @rdname generics
#' @export
delete_storage_dir.adls_filesystem <- function(container, dir, confirm=TRUE, ...)
delete_adls_dir(container, dir, confirm=confirm, ...)
# delete file
#' @rdname generics
#' @export
delete_storage_file <- function(container, ...)
UseMethod("delete_storage_file")
#' @rdname generics
#' @export
delete_storage_file.blob_container <- function(container, file, ...)
delete_blob(container, file, ...)
#' @rdname generics
#' @export
delete_storage_file.file_share <- function(container, file, ...)
delete_azure_file(container, file, ...)
#' @rdname generics
#' @export
delete_storage_file.adls_filesystem <- function(container, file, confirm=TRUE, ...)
delete_adls_file(container, file, confirm=confirm, ...)
# check existence
#' @rdname generics
#' @export
storage_file_exists <- function(container, file, ...)
UseMethod("storage_file_exists")
#' @rdname generics
#' @export
storage_file_exists.blob_container <- function(container, file, ...)
blob_exists(container, file, ...)
#' @rdname generics
#' @export
storage_file_exists.file_share <- function(container, file, ...)
azure_file_exists(container, file, ...)
#' @rdname generics
#' @export
storage_file_exists.adls_filesystem <- function(container, file, ...)
adls_file_exists(container, file, ...)
# check dir existence
#' @rdname generics
#' @export
storage_dir_exists <- function(container, dir, ...)
UseMethod("storage_dir_exists")
#' @rdname generics
#' @export
storage_dir_exists.blob_container <- function(container, dir, ...)
blob_dir_exists(container, dir, ...)
#' @rdname generics
#' @export
storage_dir_exists.file_share <- function(container, dir, ...)
azure_dir_exists(container, dir, ...)
#' @rdname generics
#' @export
storage_dir_exists.adls_filesystem <- function(container, dir, ...)
adls_dir_exists(container, dir, ...)
# snapshots
#' @rdname generics
#' @export
create_storage_snapshot <- function(container, file, ...)
UseMethod("create_storage_snapshot")
#' @rdname generics
#' @export
create_storage_snapshot.blob_container <- function(container, file, ...)
create_blob_snapshot(container, file, ...)
#' @rdname generics
#' @export
list_storage_snapshots <- function(container, ...)
UseMethod("list_storage_snapshots")
#' @rdname generics
#' @export
list_storage_snapshots.blob_container <- function(container, ...)
list_blob_snapshots(container, ...)
#' @rdname generics
#' @export
delete_storage_snapshot <- function(container, file, ...)
UseMethod("delete_storage_snapshot")
#' @rdname generics
#' @export
delete_storage_snapshot.blob_container <- function(container, file, ...)
delete_blob_snapshot(container, file, ...)
# versions
#' @rdname generics
#' @export
list_storage_versions <- function(container, ...)
UseMethod("list_storage_versions")
#' @rdname generics
#' @export
list_storage_versions.blob_container <- function(container, ...)
list_blob_versions(container, ...)
#' @rdname generics
#' @export
delete_storage_version <- function(container, file, ...)
UseMethod("delete_storage_version")
#' @rdname generics
#' @export
delete_storage_version.blob_container <- function(container, file, ...)
delete_blob_version(container, file, ...)
| /scratch/gouwar.j/cran-all/cranData/AzureStor/R/client_generics.R |
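# Internal helpers to normalise the download destination: a filename, an existing
# rawConnection, or NULL (in which case the downloaded data is returned as a raw vector
# by the calling download function). do_md5_check compares the destination's MD5 hash
# against the source's stored Content-MD5 property, if present.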
init_download_dest <- function(dest, overwrite)
{
UseMethod("init_download_dest")
}
init_download_dest.character <- function(dest, overwrite)
{
if(!overwrite && file.exists(dest))
stop("Destination file exists and overwrite is FALSE", call.=FALSE)
if(!dir.exists(dirname(dest)))
dir.create(dirname(dest), recursive=TRUE)
f <- file(dest, "w+b")
structure(f, class=c("file_dest", class(f)))
}
init_download_dest.rawConnection <- function(dest, overwrite)
{
structure(dest, class=c("conn_dest", class(dest)))
}
init_download_dest.NULL <- function(dest, overwrite)
{
con <- rawConnection(raw(0), "w+b")
structure(con, class=c("null_dest", class(con)))
}
dispose_download_dest <- function(dest)
{
UseMethod("dispose_download_dest")
}
dispose_download_dest.file_dest <- function(dest)
{
close(dest)
}
dispose_download_dest.conn_dest <- function(dest)
{
seek(dest, 0)
}
dispose_download_dest.null_dest <- function(dest)
{
close(dest)
}
do_md5_check <- function(dest, src_md5)
{
if(is.null(src_md5))
{
warning("Source file MD5 hash not set", call.=FALSE)
return()
}
seek(dest, 0)
dest_md5 <- encode_md5(dest)
if(dest_md5 != src_md5)
stop("Destination and source MD5 hashes do not match", call.=FALSE)
}
| /scratch/gouwar.j/cran-all/cranData/AzureStor/R/download_dest.R |
#' Operations on a file endpoint
#'
#' Get, list, create, or delete file shares.
#'
#' @param endpoint Either a file endpoint object as created by [storage_endpoint], or a character string giving the URL of the endpoint.
#' @param key,token,sas If an endpoint object is not supplied, authentication credentials: either an access key, an Azure Active Directory (AAD) token, or a SAS, in that order of priority.
#' @param api_version If an endpoint object is not supplied, the storage API version to use when interacting with the host. Currently defaults to `"2019-07-07"`.
#' @param name The name of the file share to get, create, or delete.
#' @param confirm For deleting a share, whether to ask for confirmation.
#' @param x For the print method, a file share object.
#' @param ... Further arguments passed to lower-level functions.
#'
#' @details
#' You can call these functions in a couple of ways: by passing the full URL of the share, or by passing the endpoint object and the name of the share as a string.
#'
#' @return
#' For `file_share` and `create_file_share`, an S3 object representing an existing or created share respectively.
#'
#' For `list_file_shares`, a list of such objects.
#'
#' @seealso
#' [storage_endpoint], [az_storage], [storage_container]
#'
#' @examples
#' \dontrun{
#'
#' endp <- file_endpoint("https://mystorage.file.core.windows.net/", key="access_key")
#'
#' # list file shares
#' list_file_shares(endp)
#'
#' # get, create, and delete a file share
#' file_share(endp, "myshare")
#' create_file_share(endp, "newshare")
#' delete_file_share(endp, "newshare")
#'
#' # alternative way to do the same
#' file_share("https://mystorage.file.file.windows.net/myshare", key="access_key")
#' create_file_share("https://mystorage.file.core.windows.net/newshare", key="access_key")
#' delete_file_share("https://mystorage.file.core.windows.net/newshare", key="access_key")
#'
#' }
#' @rdname file_share
#' @export
file_share <- function(endpoint, ...)
{
UseMethod("file_share")
}
#' @rdname file_share
#' @export
file_share.character <- function(endpoint, key=NULL, token=NULL, sas=NULL,
api_version=getOption("azure_storage_api_version"),
...)
{
do.call(file_share, generate_endpoint_container(endpoint, key, token, sas, api_version))
}
#' @rdname file_share
#' @export
file_share.file_endpoint <- function(endpoint, name, ...)
{
obj <- list(name=name, endpoint=endpoint)
class(obj) <- c("file_share", "storage_container")
obj
}
#' @rdname file_share
#' @export
print.file_share <- function(x, ...)
{
cat("Azure file share '", x$name, "'\n", sep="")
url <- httr::parse_url(x$endpoint$url)
url$path <- x$name
cat(sprintf("URL: %s\n", httr::build_url(url)))
if(!is_empty(x$endpoint$key))
cat("Access key: <hidden>\n")
else cat("Access key: <none supplied>\n")
if(!is_empty(x$endpoint$token))
{
cat("Azure Active Directory token:\n")
print(x$endpoint$token)
}
else cat("Azure Active Directory token: <none supplied>\n")
if(!is_empty(x$endpoint$sas))
cat("Account shared access signature: <hidden>\n")
else cat("Account shared access signature: <none supplied>\n")
cat(sprintf("Storage API version: %s\n", x$endpoint$api_version))
invisible(x)
}
#' @rdname file_share
#' @export
list_file_shares <- function(endpoint, ...)
{
UseMethod("list_file_shares")
}
#' @rdname file_share
#' @export
list_file_shares.character <- function(endpoint, key=NULL, token=NULL, sas=NULL,
api_version=getOption("azure_storage_api_version"),
...)
{
do.call(list_file_shares, generate_endpoint_container(endpoint, key, token, sas, api_version))
}
#' @rdname file_share
#' @export
list_file_shares.file_endpoint <- function(endpoint, ...)
{
res <- call_storage_endpoint(endpoint, "/", options=list(comp="list"))
lst <- lapply(res$Shares, function(cont) file_share(endpoint, cont$Name[[1]]))
while(length(res$NextMarker) > 0)
{
res <- call_storage_endpoint(endpoint, "/", options=list(comp="list", marker=res$NextMarker[[1]]))
lst <- c(lst, lapply(res$Shares, function(cont) file_share(endpoint, cont$Name[[1]])))
}
named_list(lst)
}
#' @rdname file_share
#' @export
create_file_share <- function(endpoint, ...)
{
UseMethod("create_file_share")
}
#' @rdname file_share
#' @export
create_file_share.character <- function(endpoint, key=NULL, token=NULL, sas=NULL,
api_version=getOption("azure_storage_api_version"),
...)
{
endp <- generate_endpoint_container(endpoint, key, token, sas, api_version)
create_file_share(endp$endpoint, endp$name, ...)
}
#' @rdname file_share
#' @export
create_file_share.file_share <- function(endpoint, ...)
{
create_file_share(endpoint$endpoint, endpoint$name)
}
#' @rdname file_share
#' @export
create_file_share.file_endpoint <- function(endpoint, name, ...)
{
obj <- file_share(endpoint, name)
do_container_op(obj, options=list(restype="share"), headers=list(...), http_verb="PUT")
obj
}
#' @rdname file_share
#' @export
delete_file_share <- function(endpoint, ...)
{
UseMethod("delete_file_share")
}
#' @rdname file_share
#' @export
delete_file_share.character <- function(endpoint, key=NULL, token=NULL, sas=NULL,
api_version=getOption("azure_storage_api_version"),
...)
{
endp <- generate_endpoint_container(endpoint, key, token, sas, api_version)
delete_file_share(endp$endpoint, endp$name, ...)
}
#' @rdname file_share
#' @export
delete_file_share.file_share <- function(endpoint, ...)
{
delete_file_share(endpoint$endpoint, endpoint$name, ...)
}
#' @rdname file_share
#' @export
delete_file_share.file_endpoint <- function(endpoint, name, confirm=TRUE, ...)
{
if(!delete_confirmed(confirm, paste0(endpoint$url, name), "share"))
return(invisible(NULL))
obj <- file_share(endpoint, name)
invisible(do_container_op(obj, options=list(restype="share"), http_verb="DELETE"))
}
#' Operations on a file share
#'
#' Upload, download, or delete a file; list files in a directory; create or delete directories; check file existence.
#'
#' @param share A file share object.
#' @param dir,file A string naming a directory or file respectively.
#' @param info Whether to return names only, or all information in a directory listing.
#' @param src,dest The source and destination files for uploading and downloading. See 'Details' below.
#' @param confirm Whether to ask for confirmation on deleting a file or directory.
#' @param recursive For the multiupload/download functions, whether to recursively transfer files in subdirectories. For `list_azure_dir`, whether to include the contents of any subdirectories in the listing. For `create_azure_dir`, whether to recursively create each component of a nested directory path. For `delete_azure_dir`, whether to delete a subdirectory's contents first. Note that in all cases this can be slow, so try to use a non-recursive solution if possible.
#' @param create_dir For the uploading functions, whether to create the destination directory if it doesn't exist. As with `recursive`, this can be slow for the file storage API, hence it is optional.
#' @param blocksize The number of bytes to upload/download per HTTP(S) request.
#' @param overwrite When downloading, whether to overwrite an existing destination file.
#' @param use_azcopy Whether to use the AzCopy utility from Microsoft to do the transfer, rather than doing it in R.
#' @param max_concurrent_transfers For `multiupload_azure_file` and `multidownload_azure_file`, the maximum number of concurrent file transfers. Each concurrent file transfer requires a separate R process, so limit this if you are low on memory.
#' @param prefix For `list_azure_files`, filters the result to return only files and directories whose name begins with this prefix.
#' @param put_md5 For uploading, whether to compute the MD5 hash of the file(s). This will be stored as part of the file's properties.
#' @param check_md5 For downloading, whether to verify the MD5 hash of the downloaded file(s). This requires that the file's `Content-MD5` property is set. If this is TRUE and the `Content-MD5` property is missing, a warning is generated.
#'
#' @details
#' `upload_azure_file` and `download_azure_file` are the workhorse file transfer functions for file storage. They each take as inputs a _single_ filename as the source for uploading/downloading, and a single filename as the destination. Alternatively, for uploading, `src` can be a [textConnection] or [rawConnection] object; and for downloading, `dest` can be NULL or a `rawConnection` object. If `dest` is NULL, the downloaded data is returned as a raw vector, and if a raw connection, it will be placed into the connection. See the examples below.
#'
#' `multiupload_azure_file` and `multidownload_azure_file` are functions for uploading and downloading _multiple_ files at once. They parallelise file transfers by using the background process pool provided by AzureRMR, which can lead to significant efficiency gains when transferring many small files. There are two ways to specify the source and destination for these functions:
#' - Both `src` and `dest` can be vectors naming the individual source and destination pathnames.
#' - The `src` argument can be a wildcard pattern expanding to one or more files, with `dest` naming a destination directory. In this case, if `recursive` is true, the file transfer will replicate the source directory structure at the destination.
#'
#' `upload_azure_file` and `download_azure_file` can display a progress bar to track the file transfer. You can control whether to display this with `options(azure_storage_progress_bar=TRUE|FALSE)`; the default is TRUE.
#'
#' `azure_file_exists` and `azure_dir_exists` test for the existence of a file and directory, respectively.
#'
#' ## AzCopy
#' `upload_azure_file` and `download_azure_file` have the ability to use the AzCopy commandline utility to transfer files, instead of native R code. This can be useful if you want to take advantage of AzCopy's logging and recovery features; it may also be faster in the case of transferring a very large number of small files. To enable this, set the `use_azcopy` argument to TRUE.
#'
#' Note that AzCopy only supports SAS and AAD (OAuth) tokens as authentication methods. AzCopy also expects a single filename or wildcard spec as its source/destination argument, not a vector of filenames or a connection.
#'
#' @return
#' For `list_azure_files`, if `info="name"`, a vector of file/directory names. If `info="all"`, a data frame giving the file size and whether each object is a file or directory.
#'
#' For `download_azure_file`, if `dest=NULL`, the contents of the downloaded file as a raw vector.
#'
#' For `azure_file_exists`, either TRUE or FALSE.
#'
#' @seealso
#' [file_share], [az_storage], [storage_download], [call_azcopy]
#'
#' [AzCopy version 10 on GitHub](https://github.com/Azure/azure-storage-azcopy)
#'
#' @examples
#' \dontrun{
#'
#' share <- file_share("https://mystorage.file.core.windows.net/myshare", key="access_key")
#'
#' list_azure_files(share, "/")
#' list_azure_files(share, "/", recursive=TRUE)
#'
#' create_azure_dir(share, "/newdir")
#'
#' upload_azure_file(share, "~/bigfile.zip", dest="/newdir/bigfile.zip")
#' download_azure_file(share, "/newdir/bigfile.zip", dest="~/bigfile_downloaded.zip")
#'
#' delete_azure_file(share, "/newdir/bigfile.zip")
#' delete_azure_dir(share, "/newdir")
#'
#' # uploading/downloading multiple files at once
#' multiupload_azure_file(share, "/data/logfiles/*.zip")
#' multidownload_azure_file(share, "/monthly/jan*.*", "/data/january")
#'
#' # you can also pass a vector of file/pathnames as the source and destination
#' src <- c("file1.csv", "file2.csv", "file3.csv")
#' dest <- paste0("uploaded_", src)
#' multiupload_azure_file(share, src, dest)
#'
#' # uploading serialized R objects via connections
#' json <- jsonlite::toJSON(iris, pretty=TRUE, auto_unbox=TRUE)
#' con <- textConnection(json)
#' upload_azure_file(share, con, "iris.json")
#'
#' rds <- serialize(iris, NULL)
#' con <- rawConnection(rds)
#' upload_azure_file(share, con, "iris.rds")
#'
#' # downloading files into memory: as a raw vector, and via a connection
#' rawvec <- download_azure_file(share, "iris.json", NULL)
#' rawToChar(rawvec)
#'
#' con <- rawConnection(raw(0), "r+")
#' download_azure_file(share, "iris.rds", con)
#' unserialize(con)
#'
#' }
#' @rdname file
#' @export
list_azure_files <- function(share, dir="/", info=c("all", "name"),
prefix=NULL, recursive=FALSE)
{
info <- match.arg(info)
opts <- list(comp="list", restype="directory")
if(!is_empty(prefix))
opts <- c(opts, prefix=as.character(prefix))
out <- NULL
repeat
{
lst <- do_container_op(share, dir, options=opts)
out <- c(out, lst$Entries)
if(is_empty(lst$NextMarker))
break
else opts$marker <- lst$NextMarker[[1]]
}
name <- vapply(out, function(ent) ent$Name[[1]], FUN.VALUE=character(1))
isdir <- if(is_empty(name)) logical(0) else names(name) == "Directory"
size <- vapply(out,
function(ent) if(is_empty(ent$Properties)) NA_character_
else ent$Properties$`Content-Length`[[1]],
FUN.VALUE=character(1))
df <- data.frame(name=name, size=as.numeric(size), isdir=isdir, stringsAsFactors=FALSE, row.names=NULL)
df$name <- sub("^//", "", file.path(dir, df$name))
if(recursive)
{
dirs <- df$name[df$isdir]
nextlevel <- lapply(dirs, function(d) list_azure_files(share, d, info="all", prefix=prefix, recursive=TRUE))
df <- do.call(vctrs::vec_rbind, c(list(df), nextlevel))
}
if(info == "name")
df$name
else df
}
#' @rdname file
#' @export
upload_azure_file <- function(share, src, dest=basename(src), create_dir=FALSE, blocksize=2^22, put_md5=FALSE,
use_azcopy=FALSE)
{
if(use_azcopy)
azcopy_upload(share, src, dest, blocksize=blocksize, put_md5=put_md5)
else upload_azure_file_internal(share, src, dest, create_dir=create_dir, blocksize=blocksize, put_md5=put_md5)
}
#' @rdname file
#' @export
multiupload_azure_file <- function(share, src, dest, recursive=FALSE, create_dir=recursive, blocksize=2^22,
put_md5=FALSE, use_azcopy=FALSE,
max_concurrent_transfers=10)
{
if(use_azcopy)
return(azcopy_upload(share, src, dest, blocksize=blocksize, recursive=recursive, put_md5=put_md5))
multiupload_internal(share, src, dest, recursive=recursive, create_dir=create_dir, blocksize=blocksize,
put_md5=put_md5, max_concurrent_transfers=max_concurrent_transfers)
}
#' @rdname file
#' @export
download_azure_file <- function(share, src, dest=basename(src), blocksize=2^22, overwrite=FALSE,
check_md5=FALSE, use_azcopy=FALSE)
{
if(use_azcopy)
azcopy_download(share, src, dest, overwrite=overwrite, check_md5=check_md5)
else download_azure_file_internal(share, src, dest, blocksize=blocksize, overwrite=overwrite,
check_md5=check_md5)
}
#' @rdname file
#' @export
multidownload_azure_file <- function(share, src, dest, recursive=FALSE, blocksize=2^22, overwrite=FALSE,
check_md5=FALSE, use_azcopy=FALSE,
max_concurrent_transfers=10)
{
if(use_azcopy)
return(azcopy_download(share, src, dest, overwrite=overwrite, recursive=recursive, check_md5=check_md5))
multidownload_internal(share, src, dest, recursive=recursive, blocksize=blocksize, overwrite=overwrite,
check_md5=check_md5, max_concurrent_transfers=max_concurrent_transfers)
}
#' @rdname file
#' @export
delete_azure_file <- function(share, file, confirm=TRUE)
{
if(!delete_confirmed(confirm, paste0(share$endpoint$url, share$name, "/", file), "file"))
return(invisible(NULL))
invisible(do_container_op(share, file, http_verb="DELETE"))
}
#' @rdname file
#' @export
create_azure_dir <- function(share, dir, recursive=FALSE)
{
if(dir %in% c("/", "."))
return(invisible(NULL))
if(recursive)
try(create_azure_dir(share, dirname(dir), recursive=TRUE), silent=TRUE)
invisible(do_container_op(share, dir, options=list(restype="directory"),
headers=dir_default_perms, http_verb="PUT"))
}
#' @rdname file
#' @export
delete_azure_dir <- function(share, dir, recursive=FALSE, confirm=TRUE)
{
if(dir %in% c("/", ".") && !recursive)
return(invisible(NULL))
if(!delete_confirmed(confirm, paste0(share$endpoint$url, share$name, "/", dir), "directory"))
return(invisible(NULL))
if(recursive)
{
conts <- list_azure_files(share, dir, recursive=TRUE)
for(i in rev(seq_len(nrow(conts))))
{
# delete all files and dirs
# assumption is that files will be listed after their parent dir
if(conts$isdir[i])
delete_azure_dir(share, conts$name[i], recursive=FALSE, confirm=FALSE)
else delete_azure_file(share, conts$name[i], confirm=FALSE)
}
}
if(dir == "/")
return(invisible(NULL))
invisible(do_container_op(share, dir, options=list(restype="directory"), http_verb="DELETE"))
}
#' @rdname file
#' @export
azure_file_exists <- function(share, file)
{
res <- do_container_op(share, file, headers = list(), http_verb = "HEAD", http_status_handler = "pass")
if (httr::status_code(res) == 404L)
return(FALSE)
httr::stop_for_status(res, storage_error_message(res))
return(TRUE)
}
#' @rdname file
#' @export
azure_dir_exists <- function(share, dir)
{
if(dir == "/")
return(TRUE)
# attempt to get directory properties
options <- list(restype="directory")
res <- do_container_op(share, dir, options=options, http_verb="HEAD", http_status_handler="pass")
httr::status_code(res) < 300
}
| /scratch/gouwar.j/cran-all/cranData/AzureStor/R/file_client_funcs.R |
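# Default SMB property headers sent when creating directories and files, and when
# updating existing files: permissions are inherited from the parent, creation and
# last-write times are set to "now" on creation and preserved on updates, and no
# file attributes are set.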
dir_default_perms <- list(
`x-ms-file-permission`="inherit",
`x-ms-file-creation-time`="now",
`x-ms-file-last-write-time`="now",
`x-ms-file-attributes`="None"
)
file_default_perms <- list(
`x-ms-file-permission`="inherit",
`x-ms-file-creation-time`="now",
`x-ms-file-last-write-time`="now",
`x-ms-file-attributes`="None"
)
file_update_perms <- list(
`x-ms-file-permission`="inherit",
`x-ms-file-creation-time`="preserve",
`x-ms-file-last-write-time`="preserve",
`x-ms-file-attributes`="None"
)
| /scratch/gouwar.j/cran-all/cranData/AzureStor/R/file_permissions.R |
upload_azure_file_internal <- function(share, src, dest, create_dir=FALSE, blocksize=2^22, put_md5=FALSE)
{
src <- normalize_src(src, put_md5)
on.exit(close(src$con))
# file API needs separate call(s) to create destination dir
if(create_dir)
try(create_azure_dir(share, dirname(dest), recursive=TRUE), silent=TRUE)
# first, create the file
# ensure content-length is never exponential notation
headers <- list("x-ms-type"="file",
"x-ms-content-type"=src$content_type,
"x-ms-content-length"=sprintf("%.0f", src$size))
if(!is.null(src$md5))
headers <- c(headers, "x-ms-content-md5"=src$md5)
headers <- c(headers, file_default_perms)
do_container_op(share, dest, headers=headers, http_verb="PUT")
# then write the bytes into it, one block at a time
options <- list(comp="range")
headers <- list("x-ms-write"="Update")
bar <- storage_progress_bar$new(src$size, "up")
    # upload each chunk as a separate range
range_begin <- 0
while(range_begin < src$size)
{
body <- readBin(src$con, "raw", blocksize)
thisblock <- length(body)
if(thisblock == 0) # sanity check
break
# ensure content-length and range are never exponential notation
headers[["content-length"]] <- sprintf("%.0f", thisblock)
headers[["range"]] <- sprintf("bytes=%.0f-%.0f", range_begin, range_begin + thisblock - 1)
headers[["content-md5"]] <- encode_md5(body)
do_container_op(share, dest, headers=headers, body=body, options=options, progress=bar$update(),
http_verb="PUT")
bar$offset <- bar$offset + blocksize
range_begin <- range_begin + thisblock
}
bar$close()
invisible(NULL)
}
download_azure_file_internal <- function(share, src, dest, blocksize=2^22, overwrite=FALSE, check_md5=FALSE)
{
headers <- list()
dest <- init_download_dest(dest, overwrite)
on.exit(dispose_download_dest(dest))
# get file size (for progress bar) and MD5 hash
props <- get_storage_properties(share, src)
size <- as.numeric(props[["content-length"]])
src_md5 <- props[["content-md5"]]
bar <- storage_progress_bar$new(size, "down")
offset <- 0
while(offset < size)
{
headers$Range <- sprintf("bytes=%.0f-%.0f", offset, offset + blocksize - 1)
res <- do_container_op(share, src, headers=headers, progress=bar$update(), http_status_handler="pass")
httr::stop_for_status(res, storage_error_message(res))
writeBin(httr::content(res, as="raw"), dest)
offset <- offset + blocksize
bar$offset <- offset
}
bar$close()
if(check_md5)
do_md5_check(dest, src_md5)
if(inherits(dest, "null_dest")) rawConnectionValue(dest) else invisible(NULL)
}
| /scratch/gouwar.j/cran-all/cranData/AzureStor/R/file_transfer_internal.R |
# custom progress bar with externally computed start and end values
# necessary to handle chunked transfers properly
storage_progress_bar <- R6::R6Class("storage_progress_bar",
public=list(
display=NULL,
bar=NULL,
direction=NULL,
offset=NULL,
initialize=function(size, direction)
{
self$display <- isTRUE(getOption("azure_storage_progress_bar")) && interactive()
if(self$display && size > 0)
{
self$direction <- direction
self$offset <- 0
self$bar <- utils::txtProgressBar(min=0, max=size, style=3)
}
},
update=function()
{
if(!self$display || is.null(self$bar)) return(NULL)
func <- function(down, up)
{
now <- self$offset + if(self$direction == "down") down[[2]] else up[[2]]
utils::setTxtProgressBar(self$bar, now)
TRUE
}
# hack b/c request function is not exported by httr
req <- list(method=NULL, url=NULL, headers=NULL, fields=NULL,
options=list(noprogress=FALSE, progressfunction=func))
structure(req, class="request")
},
close=function()
{
if(self$display && !is.null(self$bar)) close(self$bar)
}
))
| /scratch/gouwar.j/cran-all/cranData/AzureStor/R/progress_bar.R |
#' Generate shared access signatures
#'
#' The simplest way for a user to access files and data in a storage account is to give them the account's access key. This gives them full control of the account, and so may be a security risk. An alternative is to provide the user with a _shared access signature_ (SAS), which limits access to specific resources and only for a set length of time. There are three kinds of SAS: account, service and user delegation.
#'
#' Listed here are S3 generics and methods to obtain a SAS for accessing storage; in addition, the [`az_storage`] resource class has R6 methods for `get_account_sas`, `get_service_sas`, `get_user_delegation_key` and `revoke_user_delegation_keys` which simply call the corresponding S3 method.
#'
#' Note that you don't need to worry about these methods if you have been _given_ a SAS, and only want to use it to access a storage account.
#'
#' @param account An object representing a storage account. Depending on the generic, this can be one of the following: an Azure resource object (of class `az_storage`); a client storage endpoint (of class `storage_endpoint`); a _blob_ storage endpoint (of class `blob_endpoint`); or a string with the name of the account.
#' @param key For `get_account_sas`, the _account_ key, which controls full access to the storage account. For `get_user_delegation_sas`, a _user delegation_ key, as obtained from `get_user_delegation_key`.
#' @param token For `get_user_delegation_key`, an AAD token from which to obtain user details. The token must have `https://storage.azure.com` as its audience.
#' @param resource For `get_user_delegation_sas` and `get_service_sas`, the resource for which the SAS is valid. Both types of SAS allow this to be either a blob container, a directory or an individual blob; the resource should be specified in the form `containername[/dirname[/blobname]]`. A service SAS can also be used with file shares and files, in which case the resource should be of the form `sharename[/path-to-filename]`.
#' @param start,expiry The start and end dates for the account or user delegation SAS. These should be `Date` or `POSIXct` values, or strings coercible to such. If not supplied, the default is to generate start and expiry values for a period of 8 hours, starting from 15 minutes before the current time.
#' @param key_start,key_expiry For `get_user_delegation_key`, the start and end dates for the user delegation key.
#' @param services For `get_account_sas`, the storage service(s) for which the SAS is valid. Defaults to `bqtf`, meaning blob (including ADLS2), queue, table and file storage.
#' @param permissions The permissions that the SAS grants. The default value of `rl` (read and list) essentially means read-only access.
#' @param resource_types For an account SAS, the resource types for which the SAS is valid. The default is `sco`, meaning service, container and object.
#' @param resource_type For a service or user delegation SAS, the type of resource for which the SAS is valid. The default is "c", meaning a blob container (for a service SAS on file storage, the default is "s", meaning a file share). Other possible values include "b" (a single blob), "bs" (a blob snapshot), "d" (a directory), or "f" (a single file). Note however that a user delegation SAS only supports blob storage.
#' @param directory_depth For a service or user delegation SAS, the depth of the directory resource, starting at 0 for the root. This is required if `resource_type="d"` and the account has a hierarchical namespace enabled.
#' @param ip The IP address(es) or IP address range(s) for which the SAS is valid. The default is not to restrict access by IP.
#' @param protocol The protocol required to use the SAS. Possible values are `https` meaning HTTPS-only, or `https,http` meaning HTTP is also allowed. Note that the storage account itself may require HTTPS, regardless of what the SAS allows.
#' @param snapshot_time For a user delegation or service SAS, the blob snapshot for which the SAS is valid. Only required if `resource_type="bs"`.
#' @param auth_api_version The storage API version to use for authenticating.
#' @param ... Arguments passed to lower-level functions.
#'
#' @details
#' An **account SAS** is secured with the storage account key. An account SAS delegates access to resources in one or more of the storage services. All of the operations available via a user delegation SAS are also available via an account SAS. You can also delegate access to read, write, and delete operations on blob containers, tables, queues, and file shares. To obtain an account SAS, call `get_account_sas`.
#'
#' A **service SAS** is like an account SAS, but allows finer-grained control of access. You can create a service SAS that allows access only to specific blobs in a container, or files in a file share. To obtain a service SAS, call `get_service_sas`.
#'
#' A **user delegation SAS** is a SAS secured with Azure AD credentials. It's recommended that you use Azure AD credentials when possible as a security best practice, rather than using the account key, which can be more easily compromised. When your application design requires shared access signatures, use Azure AD credentials to create a user delegation SAS for superior security.
#'
#' Every SAS is signed with a key. To create a user delegation SAS, you must first request a **user delegation key**, which is then used to sign the SAS. The user delegation key is analogous to the account key used to sign a service SAS or an account SAS, except that it relies on your Azure AD credentials. To request the user delegation key, call `get_user_delegation_key`. With the user delegation key, you can then create the SAS with `get_user_delegation_sas`.
#'
#' To invalidate all user delegation keys, as well as the SAS's generated with them, call `revoke_user_delegation_keys`.
#'
#' See the examples and Microsoft Docs pages below for how to specify arguments like the services, permissions, and resource types. Also, while not explicitly mentioned in the documentation, ADLSgen2 storage can use any SAS that is valid for blob storage.
#' @seealso
#' [blob_endpoint], [file_endpoint],
#' [Date], [POSIXt]
#'
#' [Azure Storage Provider API reference](https://docs.microsoft.com/en-us/rest/api/storagerp/),
#' [Azure Storage Services API reference](https://docs.microsoft.com/en-us/rest/api/storageservices/)
#'
#' [Create an account SAS](https://docs.microsoft.com/en-us/rest/api/storageservices/create-account-sas),
#' [Create a user delegation SAS](https://docs.microsoft.com/en-us/rest/api/storageservices/create-user-delegation-sas),
#' [Create a service SAS](https://docs.microsoft.com/en-us/rest/api/storageservices/create-service-sas)
#'
#' @examples
#' # account SAS valid for 7 days
#' get_account_sas("mystorage", "access_key", start=Sys.Date(), expiry=Sys.Date() + 7)
#'
#' # SAS with read/write/create/delete permissions
#' get_account_sas("mystorage", "access_key", permissions="rwcd")
#'
#' # SAS limited to blob (+ADLS2) and file storage
#' get_account_sas("mystorage", "access_key", services="bf")
#'
#' # SAS for file storage, allows access to files only (not shares)
#' get_account_sas("mystorage", "access_key", services="f", resource_types="o")
#'
#' # getting the key from an endpoint object
#' endp <- storage_endpoint("https://mystorage.blob.core.windows.net", key="access_key")
#' get_account_sas(endp, permissions="rwcd")
#'
#' # service SAS for a container
#' get_service_sas(endp, "containername")
#'
#' # service SAS for a directory
#' get_service_sas(endp, "containername/dirname")
#'
#' # read/write service SAS for a blob
#' get_service_sas(endp, "containername/blobname", permissions="rw")
#'
#' \dontrun{
#'
#' # user delegation key valid for 24 hours
#' token <- AzureRMR::get_azure_token("https://storage.azure.com", "mytenant", "app_id")
#' endp <- storage_endpoint("https://mystorage.blob.core.windows.net", token=token)
#' userkey <- get_user_delegation_key(endp, start=Sys.Date(), expiry=Sys.Date() + 1)
#'
#' # user delegation SAS for a container
#' get_user_delegation_sas(endp, userkey, resource="mycontainer")
#'
#' # user delegation SAS for a specific file, read/write/create/delete access
#' # (order of permissions is important!)
#' get_user_delegation_sas(endp, userkey, resource="mycontainer/myfile",
#'     resource_type="b", permissions="rcwd")
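#'
#' # a sketch of a user delegation SAS for a directory: this assumes an account with a
#' # hierarchical namespace, with directory_depth set to the number of path components
#' get_user_delegation_sas(endp, userkey, resource="mycontainer/dir1/dir2",
#'     resource_type="d", directory_depth=2, permissions="rl")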
#'
#' }
#' @aliases sas shared-access-signature shared_access_signature
#' @rdname sas
#' @export
get_account_sas <- function(account, ...)
{
UseMethod("get_account_sas")
}
#' @rdname sas
#' @export
get_account_sas.az_storage <- function(account, key=account$list_keys()[1], ...)
{
get_account_sas(account$name, key, ...)
}
#' @rdname sas
#' @export
get_account_sas.storage_endpoint <- function(account, key=account$key, ...)
{
if(is.null(key))
stop("Must have access key to generate SAS", call.=FALSE)
acctname <- sub("\\..*", "", httr::parse_url(account$url)$hostname)
get_account_sas(acctname, key=key, ...)
}
#' @rdname sas
#' @export
get_account_sas.default <- function(account, key, start=NULL, expiry=NULL, services="bqtf", permissions="rl",
resource_types="sco", ip=NULL, protocol=NULL,
auth_api_version=getOption("azure_storage_api_version"), ...)
{
dates <- make_sas_dates(start, expiry)
sig_str <- paste(
account,
permissions,
services,
resource_types,
dates$start,
dates$expiry,
ip,
protocol,
auth_api_version,
"", # to ensure string always ends with newline
sep="\n"
)
sig <- sign_sha256(sig_str, key)
parts <- list(
sv=auth_api_version,
ss=services,
srt=resource_types,
sp=permissions,
st=dates$start,
se=dates$expiry,
sip=ip,
spr=protocol,
sig=sig
)
parts <- parts[!sapply(parts, is_empty)]
parts <- sapply(parts, url_encode, reserved=TRUE)
paste(names(parts), parts, sep="=", collapse="&")
}
#' @rdname sas
#' @export
get_user_delegation_key <- function(account, ...)
{
UseMethod("get_user_delegation_key")
}
#' @rdname sas
#' @export
get_user_delegation_key.az_resource <- function(account, token=account$token, ...)
{
endp <- account$get_blob_endpoint(key=NULL, token=token)
get_user_delegation_key(endp, token=endp$token, ...)
}
#' @rdname sas
#' @export
get_user_delegation_key.blob_endpoint <- function(account, token=account$token, key_start=NULL, key_expiry=NULL, ...)
{
if(is.null(token))
stop("Must have AAD token to get user delegation key", call.=FALSE)
account$key <- account$sas <- NULL
account$token <- token
dates <- make_sas_dates(key_start, key_expiry)
body <- list(KeyInfo=list(
Start=list(dates$start),
Expiry=list(dates$expiry)
))
res <- call_storage_endpoint(account, "", options=list(restype="service", comp="userdelegationkey"),
body=render_xml(body), http_verb="POST")
res <- unlist(res, recursive=FALSE)
class(res) <- "user_delegation_key"
res
}
#' @rdname sas
#' @export
revoke_user_delegation_keys <- function(account)
{
UseMethod("revoke_user_delegation_keys")
}
#' @rdname sas
#' @export
revoke_user_delegation_keys.az_storage <- function(account)
{
account$do_operation("revokeUserDelegationKeys", http_verb="POST")
invisible(NULL)
}
#' @rdname sas
#' @export
get_user_delegation_sas <- function(account, ...)
{
UseMethod("get_user_delegation_sas")
}
#' @rdname sas
#' @export
get_user_delegation_sas.az_storage <- function(account, key, ...)
{
get_user_delegation_sas(account$name, key, ...)
}
#' @rdname sas
#' @export
get_user_delegation_sas.blob_endpoint <- function(account, key, ...)
{
acctname <- sub("\\..*", "", httr::parse_url(account$url)$hostname)
get_user_delegation_sas(acctname, key, ...)
}
#' @rdname sas
#' @export
get_user_delegation_sas.default <- function(account, key, resource, start=NULL, expiry=NULL, permissions="rl",
resource_type="c", ip=NULL, protocol=NULL, snapshot_time=NULL,
directory_depth=NULL,
auth_api_version=getOption("azure_storage_api_version"), ...)
{
stopifnot(inherits(key, "user_delegation_key"))
dates <- make_sas_dates(start, expiry)
resource <- file.path("/blob", account, resource)
sig_str <- paste(
permissions,
dates$start,
dates$expiry,
resource,
key$SignedOid,
key$SignedTid,
key$SignedStart,
key$SignedExpiry,
key$SignedService,
key$SignedVersion,
"",
"",
"",
ip,
protocol,
auth_api_version,
resource_type,
snapshot_time,
"",
"",
"",
"",
"",
sep="\n"
)
sig <- sign_sha256(sig_str, key$Value)
parts <- list(
sv=auth_api_version,
sr=resource_type,
st=dates$start,
se=dates$expiry,
sp=permissions,
sip=ip,
spr=protocol,
skoid=key$SignedOid,
sktid=key$SignedTid,
skt=key$SignedStart,
ske=key$SignedExpiry,
sks=key$SignedService,
skv=key$SignedVersion,
sdd=as.character(directory_depth),
sig=sig
)
parts <- parts[!sapply(parts, is_empty)]
parts <- sapply(parts, url_encode, reserved=TRUE)
paste(names(parts), parts, sep="=", collapse="&")
}
#' @rdname sas
#' @export
get_service_sas <- function(account, ...)
{
UseMethod("get_service_sas")
}
#' @rdname sas
#' @export
get_service_sas.az_storage <- function(account, resource, service=c("blob", "file"), key=account$list_keys()[1],
...)
{
service <- match.arg(service)
endp <- if(service == "blob")
account$get_blob_endpoint(key=key)
else account$get_file_endpoint(key=key)
get_service_sas(endp, resource=resource, key=key, ...)
}
#' @rdname sas
#' @export
get_service_sas.storage_endpoint <- function(account, resource, key=account$key, ...)
{
if(is.null(key))
stop("Must have access key to generate SAS", call.=FALSE)
service <- sub("_endpoint$", "", class(account)[1])
acctname <- sub("\\..*", "", httr::parse_url(account$url)$hostname)
get_service_sas(acctname, resource=resource, key=key, service=service, ...)
}
#' @param service For a service SAS, the storage service for which the SAS is valid: either "blob" or "file". Currently AzureStor does not support creating a service SAS for queue or table storage.
#' @param policy For a service SAS, optionally the name of a stored access policy to correlate the SAS with. Revoking the policy will also invalidate the SAS.
#' @rdname sas
#' @export
get_service_sas.default <- function(account, resource, key, service, start=NULL, expiry=NULL, permissions="rl",
resource_type=NULL, ip=NULL, protocol=NULL, policy=NULL, snapshot_time=NULL,
directory_depth=NULL, auth_api_version=getOption("azure_storage_api_version"), ...)
{
if(!(service %in% c("blob", "file")))
stop("Generating a service SAS currently only supported for blob and file storage", call.=FALSE)
dates <- make_sas_dates(start, expiry)
if(is.null(resource_type))
resource_type <- if(service == "blob") "c" else "s"
resource <- file.path("", service, account, resource) # canonicalized resource starts with /
sig_str <- if(service == "blob")
paste(
permissions,
dates$start,
dates$expiry,
resource,
policy,
ip,
protocol,
auth_api_version,
resource_type,
snapshot_time,
"", # rscc, rscd, rsce, rscl, rsct not yet implemented
"",
"",
"",
"",
sep="\n"
)
else paste(
permissions,
dates$start,
dates$expiry,
resource,
policy,
ip,
protocol,
auth_api_version,
"", # rscc, rscd, rsce, rscl, rsct not yet implemented
"",
"",
"",
"",
sep="\n"
)
sig <- sign_sha256(sig_str, key)
parts <- list(
sv=auth_api_version,
sr=resource_type,
sp=permissions,
st=dates$start,
se=dates$expiry,
sip=ip,
spr=protocol,
si=policy,
sdd=as.character(directory_depth),
sig=sig
)
parts <- parts[!sapply(parts, is_empty)]
parts <- sapply(parts, url_encode, reserved=TRUE)
paste(names(parts), parts, sep="=", collapse="&")
}
make_sas_dates <- function(start=NULL, expiry=NULL)
{
if(is.null(start))
start <- Sys.time() - 15*60
else if(!inherits(start, "POSIXt"))
start <- as.POSIXct(start, origin="1970-01-01")
if(is.null(expiry)) # by default, 8 hours after start
expiry <- start + 8 * 60 * 60
else if(!inherits(expiry, "POSIXt"))
expiry <- as.POSIXct(expiry, origin="1970-01-01")
if(inherits(start, c("POSIXt", "Date")))
start <- strftime(start, "%Y-%m-%dT%H:%M:%SZ", tz="UTC")
if(inherits(expiry, c("POSIXt", "Date")))
expiry <- strftime(expiry, "%Y-%m-%dT%H:%M:%SZ", tz="UTC")
list(start=start, expiry=expiry)
}
| /scratch/gouwar.j/cran-all/cranData/AzureStor/R/sas.R |
#' Signs a request to the storage REST endpoint with a shared key
#' @param endpoint An endpoint object.
#' @param ... Further arguments to pass to individual methods.
#' @details
#' This is a generic method to allow for variations in how the different storage services handle key authorisation. The default method works with blob, file and ADLSgen2 storage.
#' @return
#' A named list of request headers. One of these should be the `Authorization` header containing the request signature.
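#' @examples
#' \dontrun{
#'
#' # sign_request is normally called internally by call_storage_endpoint, rather than directly.
#' # A minimal sketch with illustrative values: sign a request to list the blobs in a container
#' endp <- blob_endpoint("https://mystorage.blob.core.windows.net", key="access_key")
#' url <- httr::parse_url("https://mystorage.blob.core.windows.net/mycontainer")
#' url$query <- list(restype="container", comp="list")
#' headers <- sign_request(endp, "GET", url, list(), endp$api_version)
#'
#' }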
#' @export
sign_request <- function(endpoint, ...)
{
UseMethod("sign_request")
}
sign_request.default <- function(endpoint, verb, url, headers, api, ...)
{
host_url <- httr::parse_url(endpoint$url)
acct_name <- if(host_url$path == "") sub("\\..+$", "", host_url$hostname) else host_url$path
resource <- paste0("/", acct_name, "/", url$path) # don't use file.path because it strips trailing / on Windows
# sanity check
resource <- gsub("//", "/", resource)
if(is.null(headers$date) || is.null(headers$Date))
headers$date <- httr::http_date(Sys.time())
if(is.null(headers$`x-ms-version`))
headers$`x-ms-version` <- api
sig <- make_signature(endpoint$key, verb, acct_name, resource, url$query, headers)
modifyList(headers, list(Host=url$host, Authorization=sig))
}
make_signature <- function(key, verb, acct_name, resource, options, headers)
{
names(headers) <- tolower(names(headers))
ms_headers <- headers[grepl("^x-ms", names(headers))]
ms_headers <- ms_headers[order(names(ms_headers))]
ms_headers <- paste(names(ms_headers), ms_headers, sep=":", collapse="\n")
options <- options[!sapply(options, is.null)]
options <- paste(names(options), options, sep=":", collapse="\n")
sig <- paste(verb,
as.character(headers[["content-encoding"]]),
as.character(headers[["content-language"]]),
as.character(headers[["content-length"]]),
as.character(headers[["content-md5"]]),
as.character(headers[["content-type"]]),
as.character(headers[["date"]]),
as.character(headers[["if-modified-since"]]),
as.character(headers[["if-match"]]),
as.character(headers[["if-none-match"]]),
as.character(headers[["if-unmodified-since"]]),
as.character(headers[["range"]]),
ms_headers,
resource,
options, sep="\n")
sig <- sub("\n$", "", sig) # undocumented, found thanks to Tsuyoshi Matsuzaki's blog post
paste0("SharedKey ", acct_name, ":", sign_sha256(sig, key))
}
| /scratch/gouwar.j/cran-all/cranData/AzureStor/R/sign_request.R |
#' Get/set user-defined metadata for a storage object
#'
#' @param object A blob container, file share or ADLS filesystem object.
#' @param blob,file Optionally the name of an individual blob, file or directory within a container.
#' @param isdir For the file share method, whether the `file` argument is a file or directory. If omitted, `get_storage_metadata` will auto-detect the type; however this can be slow, so supply this argument if possible.
#' @param snapshot,version For the blob method of `get_storage_metadata`, optional snapshot and version identifiers. These should be datetime strings, in the format "yyyy-mm-ddTHH:MM:SS.SSSSSSSZ". Ignored if `blob` is omitted.
#' @param ... For the metadata setters, name-value pairs to set as metadata for a blob or file.
#' @param keep_existing For the metadata setters, whether to retain existing metadata information.
#' @details
#' These methods let you get and set user-defined properties (metadata) for storage objects.
#' @return
#' `get_storage_metadata` returns a named list of metadata properties. If the `blob` or `file` argument is present, the properties will be for the blob/file specified. If this argument is omitted, the properties will be for the container itself.
#'
#' `set_storage_metadata` returns the same list after setting the object's metadata, invisibly.
#' @seealso
#' [blob_container], [file_share], [adls_filesystem]
#'
#' [get_storage_properties] for standard properties
#' @examples
#' \dontrun{
#'
#' fs <- storage_container("https://mystorage.dfs.core.windows.net/myshare", key="access_key")
#' create_storage_dir(fs, "newdir")
#' storage_upload(fs, "iris.csv", "newdir/iris.csv")
#'
#' set_storage_metadata(fs, "newdir/iris.csv", name1="value1")
#' # will be list(name1="value1")
#' get_storage_metadata(fs, "newdir/iris.csv")
#'
#' set_storage_metadata(fs, "newdir/iris.csv", name2="value2")
#' # will be list(name1="value1", name2="value2")
#' get_storage_metadata(fs, "newdir/iris.csv")
#'
#' set_storage_metadata(fs, "newdir/iris.csv", name3="value3", keep_existing=FALSE)
#' # will be list(name3="value3")
#' get_storage_metadata(fs, "newdir/iris.csv")
#'
#' # deleting all metadata
#' set_storage_metadata(fs, "newdir/iris.csv", keep_existing=FALSE)
#'
#' }
#' @rdname metadata
#' @export
get_storage_metadata <- function(object, ...)
{
UseMethod("get_storage_metadata")
}
#' @rdname metadata
#' @export
get_storage_metadata.blob_container <- function(object, blob, snapshot=NULL, version=NULL, ...)
{
if(missing(blob))
{
options <- list(restype="container", comp="metadata")
blob <- ""
}
else
{
options <- list(comp="metadata")
if(!is.null(snapshot))
options$snapshot <- snapshot
if(!is.null(version))
options$versionid <- version
}
res <- do_container_op(object, blob, options=options, http_verb="HEAD")
get_classic_metadata_headers(res)
}
#' @rdname metadata
#' @export
get_storage_metadata.file_share <- function(object, file, isdir, ...)
{
if(missing(isdir))
{
res <- tryCatch(Recall(object, file, FALSE), error=function(e) e)
if(inherits(res, "error"))
res <- tryCatch(Recall(object, file, TRUE), error=function(e) e)
if(inherits(res, "error"))
stop(res)
return(res)
}
if(missing(file))
{
options <- list(restype="share", comp="metadata")
file <- ""
}
else if(isdir)
options <- list(restype="directory", comp="metadata")
else options <- list(comp="metadata")
res <- do_container_op(object, file, options=options, http_verb="HEAD")
get_classic_metadata_headers(res)
}
#' @rdname metadata
#' @export
get_storage_metadata.adls_filesystem <- function(object, file, ...)
{
res <- get_storage_properties(object, file)
get_adls_metadata_header(res)
}
#' @rdname metadata
#' @export
set_storage_metadata <- function(object, ...)
{
UseMethod("set_storage_metadata")
}
#' @rdname metadata
#' @export
set_storage_metadata.blob_container <- function(object, blob, ..., keep_existing=TRUE)
{
meta <- if(keep_existing)
modifyList(get_storage_metadata(object, blob), list(...))
else list(...)
if(missing(blob))
{
options <- list(restype="container", comp="metadata")
blob <- ""
}
else options <- list(comp="metadata")
do_container_op(object, blob, options=options, headers=set_classic_metadata_headers(meta),
http_verb="PUT")
invisible(meta)
}
#' @rdname metadata
#' @export
set_storage_metadata.file_share <- function(object, file, isdir, ..., keep_existing=TRUE)
{
if(missing(isdir))
{
res <- tryCatch(Recall(object, file, ..., keep_existing=keep_existing, isdir=TRUE), error=function(e) e)
if(inherits(res, "error"))
res <- tryCatch(Recall(object, file, ..., keep_existing=keep_existing, isdir=FALSE), error=function(e) e)
if(inherits(res, "error"))
stop(res)
return(res)
}
meta <- if(keep_existing)
modifyList(get_storage_metadata(object, file, isdir=isdir), list(...))
else list(...)
if(missing(file))
{
options <- list(restype="share", comp="metadata")
file <- ""
}
else if(isdir)
options <- list(restype="directory", comp="metadata")
else options <- list(comp="metadata")
do_container_op(object, file, options=options, headers=set_classic_metadata_headers(meta),
http_verb="PUT")
invisible(meta)
}
#' @rdname metadata
#' @export
set_storage_metadata.adls_filesystem <- function(object, file, ..., keep_existing=TRUE)
{
meta <- if(keep_existing)
modifyList(get_storage_metadata(object, file), list(...))
else list(...)
if(missing(file))
{
options <- list(resource="filesystem")
file <- ""
}
else options <- list(action="setProperties")
do_container_op(object, file, options=options, headers=set_adls_metadata_header(meta),
http_verb="PATCH")
invisible(meta)
}
get_classic_metadata_headers <- function(res)
{
res <- res[grepl("^x-ms-meta-", names(res))]
names(res) <- substr(names(res), 11, nchar(names(res)))
res
}
set_classic_metadata_headers <- function(metalist)
{
if(is_empty(metalist))
return(metalist)
if(is.null(names(metalist)) || any(names(metalist) == ""))
stop("All metadata values must be named")
names(metalist) <- paste0("x-ms-meta-", names(metalist))
metalist
}
get_adls_metadata_header <- function(res)
{
meta <- strsplit(res[["x-ms-properties"]], ",")[[1]]
pos <- regexpr("=", meta)
if(any(pos == -1))
stop("Error getting object metadata", call.=FALSE)
metanames <- substr(meta, 1, pos - 1)
metavals <- lapply(substr(meta, pos + 1, nchar(meta)),
function(x) rawToChar(openssl::base64_decode(x)))
names(metavals) <- metanames
metavals
}
set_adls_metadata_header <- function(metalist)
{
if(is_empty(metalist))
return(metalist)
if(is.null(names(metalist)) || any(names(metalist) == ""))
stop("All metadata values must be named")
metalist <- sapply(metalist, function(x) openssl::base64_encode(as.character(x)))
list(`x-ms-properties`=paste(names(metalist), metalist, sep="=", collapse=","))
}
| /scratch/gouwar.j/cran-all/cranData/AzureStor/R/storage_metadata.R |
#' Get storage properties for an object
#'
#' @param object A blob container, file share, or ADLS filesystem object.
#' @param filesystem An ADLS filesystem.
#' @param blob,file Optionally the name of an individual blob, file or directory within a container.
#' @param isdir For the file share method, whether the `file` argument is a file or directory. If omitted, `get_storage_properties` will auto-detect the type; however this can be slow, so supply this argument if possible.
#' @param snapshot,version For the blob method of `get_storage_properties`, optional snapshot and version identifiers. These should be datetime strings, in the format "yyyy-mm-ddTHH:MM:SS.SSSSSSSZ". Ignored if `blob` is omitted.
#' @param ... For compatibility with the generic.
#' @return
#' `get_storage_properties` returns a list describing the object properties. If the `blob` or `file` argument is present for the container methods, the properties will be for the blob/file specified. If this argument is omitted, the properties will be for the container itself.
#'
#' `get_adls_file_acl` returns a string giving the ADLSgen2 ACL for the file.
#'
#' `get_adls_file_status` returns a list of ADLSgen2 system properties for the file.
#'
#' @seealso
#' [blob_container], [file_share], [adls_filesystem]
#'
#' [get_storage_metadata] for getting and setting _user-defined_ properties (metadata)
#'
#' [list_blob_snapshots] to obtain the snapshots for a blob
#' @examples
#' \dontrun{
#'
#' fs <- storage_container("https://mystorage.dfs.core.windows.net/myshare", key="access_key")
#' create_storage_dir(fs, "newdir")
#' storage_upload(fs, "iris.csv", "newdir/iris.csv")
#'
#' get_storage_properties(fs)
#' get_storage_properties(fs, "newdir")
#' get_storage_properties(fs, "newdir/iris.csv")
#'
#' # these are ADLS only
#' get_adls_file_acl(fs, "newdir/iris.csv")
#' get_adls_file_status(fs, "newdir/iris.csv")
#'
#' }
#' @rdname properties
#' @export
get_storage_properties <- function(object, ...)
{
UseMethod("get_storage_properties")
}
#' @rdname properties
#' @export
get_storage_properties.blob_container <- function(object, blob, snapshot=NULL, version=NULL, ...)
{
# properties for container
if(missing(blob))
return(do_container_op(object, options=list(restype="container"), http_verb="HEAD"))
# properties for blob
opts <- list()
if(!is.null(snapshot))
opts$snapshot <- snapshot
if(!is.null(version))
opts$versionid <- version
do_container_op(object, blob, options=opts, http_verb="HEAD")
}
#' @rdname properties
#' @export
get_storage_properties.file_share <- function(object, file, isdir, ...)
{
# properties for container
if(missing(file))
return(do_container_op(object, options=list(restype="share"), http_verb="HEAD"))
# properties for file/directory
if(missing(isdir))
{
res <- tryCatch(Recall(object, file, FALSE), error=function(e) e)
if(inherits(res, "error"))
res <- tryCatch(Recall(object, file, TRUE), error=function(e) e)
if(inherits(res, "error"))
stop(res)
return(res)
}
options <- if(isdir) list(restype="directory") else list()
do_container_op(object, file, options=options, http_verb="HEAD")
}
#' @rdname properties
#' @export
get_storage_properties.adls_filesystem <- function(object, file, ...)
{
# properties for container
if(missing(file))
return(do_container_op(object, options=list(resource="filesystem"), http_verb="HEAD"))
# properties for file/directory
do_container_op(object, file, http_verb="HEAD")
}
#' @rdname properties
#' @export
get_adls_file_acl <- function(filesystem, file)
{
do_container_op(filesystem, file, options=list(action="getaccesscontrol"), http_verb="HEAD")[["x-ms-acl"]]
}
#' @rdname properties
#' @export
get_adls_file_status <- function(filesystem, file)
{
do_container_op(filesystem, file, options=list(action="getstatus"), http_verb="HEAD")
}
| /scratch/gouwar.j/cran-all/cranData/AzureStor/R/storage_properties.R |
#' Carry out operations on a storage account container or endpoint
#'
#' @param container,endpoint For `do_container_op`, a storage container object (inheriting from `storage_container`). For `call_storage_endpoint`, a storage endpoint object (inheriting from `storage_endpoint`).
#' @param operation The container operation to perform, which will form part of the URL path.
#' @param path The path component of the endpoint call.
#' @param options A named list giving the query parameters for the operation.
#' @param headers A named list giving any additional HTTP headers to send to the host. Note that AzureStor will handle authentication details, so you don't have to specify these here.
#' @param body The request body for a `PUT/POST/PATCH` call.
#' @param ... Any additional arguments to pass to `httr::VERB`.
#' @param http_verb The HTTP verb as a string, one of `GET`, `DELETE`, `PUT`, `POST`, `HEAD` or `PATCH`.
#' @param http_status_handler The R handler for the HTTP status code of the response. `"stop"`, `"warn"` or `"message"` will call the corresponding handlers in httr, while `"pass"` ignores the status code. The latter is primarily useful for debugging purposes.
#' @param timeout Optionally, the number of seconds to wait for a result. If the timeout interval elapses before the storage service has finished processing the operation, it returns an error. The default timeout is taken from the system option `azure_storage_timeout`; if this is `NULL` it means to use the service default.
#' @param progress Used by the file transfer functions, to display a progress bar.
#' @param return_headers Whether to return the (parsed) response headers, rather than the body. Ignored if `http_status_handler="pass"`.
#' @details
#' These functions form the low-level interface between R and the storage API. `do_container_op` constructs a path from the operation and the container name, and passes it and the other arguments to `call_storage_endpoint`.
#' @return
#' Based on the `http_status_handler` and `return_headers` arguments. If `http_status_handler` is `"pass"`, the entire response is returned without modification.
#'
#' If `http_status_handler` is one of `"stop"`, `"warn"` or `"message"`, the status code of the response is checked, and if an error is not thrown, the parsed headers or body of the response is returned. An exception is if the response was written to disk, as part of a file download; in this case, the return value is NULL.
#'
#' @seealso
#' [blob_endpoint], [file_endpoint], [adls_endpoint]
#'
#' [blob_container], [file_share], [adls_filesystem]
#'
#' [httr::GET], [httr::PUT], [httr::POST], [httr::PATCH], [httr::HEAD], [httr::DELETE]
#' @examples
#' \dontrun{
#'
#' # get the metadata for a blob
#' bl_endp <- blob_endpoint("storage_acct_url", key="key")
#' cont <- storage_container(bl_endp, "containername")
#' do_container_op(cont, "filename.txt", options=list(comp="metadata"), http_verb="HEAD")
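#'
#' # a lower-level call to the endpoint itself: list the blob containers in the account
#' # (a sketch; assumes the access key permits listing containers)
#' call_storage_endpoint(bl_endp, "/", options=list(comp="list"))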
#'
#' }
#' @rdname storage_call
#' @export
do_container_op <- function(container, operation="", options=list(), headers=list(), http_verb="GET", ...)
{
operation <- if(nchar(operation) > 0 && substr(operation, 1, 1) != "/")
paste0(container$name, "/", operation)
else paste0(container$name, operation)
call_storage_endpoint(container$endpoint, operation, options=options, headers=headers, http_verb=http_verb, ...)
}
#' @rdname storage_call
#' @export
call_storage_endpoint <- function(endpoint, path, options=list(), headers=list(), body=NULL, ...,
http_verb=c("GET", "DELETE", "PUT", "POST", "HEAD", "PATCH"),
http_status_handler=c("stop", "warn", "message", "pass"),
timeout=getOption("azure_storage_timeout"),
progress=NULL, return_headers=(http_verb == "HEAD"))
{
http_verb <- match.arg(http_verb)
url <- httr::parse_url(endpoint$url)
# fix doubled-up /'s which can result from file.path snafus etc
url$path <- gsub("/{2,}", "/", url_encode(paste0(url$path, "/", path)))
options$timeout <- timeout
if(!is_empty(options))
url$query <- options[order(names(options))] # must be sorted for access key signing
headers$`x-ms-version` <- endpoint$api_version
retries <- as.numeric(getOption("azure_storage_retries"))
r <- 0
repeat
{
r <- r + 1
# use key if provided, otherwise AAD token if provided, otherwise sas if provided, otherwise anonymous access
if(!is.null(endpoint$key))
headers <- sign_request(endpoint, http_verb, url, headers, endpoint$api_version)
else if(!is.null(endpoint$token))
headers$Authorization <- paste("Bearer", validate_token(endpoint$token))
else if(!is.null(endpoint$sas) && r == 1)
url <- add_sas(endpoint$sas, url)
# retry on curl errors, not on httr errors
response <- tryCatch(httr::VERB(http_verb, url, do.call(httr::add_headers, headers), body=body, progress, ...),
error=function(e) e)
if(retry_transfer(response) && r <= retries)
message("Connection error, retrying (", r, " of ", retries, ")")
else break
}
if(inherits(response, "error"))
stop(response)
process_storage_response(response, match.arg(http_status_handler), return_headers)
}
validate_token <- function(token)
{
if(inherits(token, "AzureToken") || inherits(token, "Token2.0"))
{
# if token has expired, renew it
if(!token$validate())
{
message("Access token has expired or is no longer valid; refreshing")
token$refresh()
}
return(token$credentials$access_token)
}
else
{
if(!is.character(token) || length(token) != 1)
stop("Token must be a string, or an object of class AzureRMR::AzureToken", call.=FALSE)
return(token)
}
}
add_sas <- function(sas, url)
{
full_url <- httr::build_url(url)
httr::parse_url(paste0(full_url, if(is.null(url$query)) "?" else "&", sub("^\\?", "", sas)))
}
process_storage_response <- function(response, handler, return_headers)
{
if(handler == "pass")
return(response)
handler <- get(paste0(handler, "_for_status"), getNamespace("httr"))
handler(response, storage_error_message(response))
if(return_headers)
return(unclass(httr::headers(response)))
# if file was written to disk, printing content(*) will read it back into memory!
if(inherits(response$content, "path"))
return(NULL)
# silence message about missing encoding
cont <- suppressMessages(httr::content(response, simplifyVector=TRUE))
if(is_empty(cont))
NULL
else if(inherits(cont, "xml_node"))
xml_to_list(cont)
else cont
}
storage_error_message <- function(response, for_httr=TRUE)
{
cont <- suppressMessages(httr::content(response))
msg <- if(inherits(cont, "xml_node"))
{
cont <- xml_to_list(cont)
paste0(unlist(cont), collapse="\n")
}
else if(is.character(cont))
cont
else if(is.list(cont) && is.character(cont$message))
cont$message
else if(is.list(cont) && is.list(cont$error) && is.character(cont$error$message))
cont$error$message
else if(is.list(cont) && is.list(cont$odata.error) && is.character(cont$odata.error$message$value))
cont$odata.error$message$value
else ""
if(for_httr)
paste0("complete Storage Services operation. Message:\n", sub("\\.$", "", msg))
else msg
}
parse_storage_url <- function(url)
{
url <- httr::parse_url(url)
endpoint <- paste0(url$scheme, "://", url$hostname, "/")
store <- sub("/.*$", "", url$path)
path <- if(url$path == store) "" else sub("^[^/]+/", "", url$path)
c(endpoint, store, path)
}
is_endpoint_url <- function(url, type)
{
# handle cases where type != uri string
if(type == "adls")
type <- "dfs"
else if(type == "web")
type <- "z26\\.web"
# endpoint URL must be of the form {scheme}://{acctname}.{type}.{etc}
type <- sprintf("^https?://[a-z0-9]+\\.%s\\.", type)
is_url(url) && grepl(type, url)
}
generate_endpoint_container <- function(url, key, token, sas, api_version)
{
stor_path <- parse_storage_url(url)
endpoint <- storage_endpoint(stor_path[1], key, token, sas, api_version)
name <- stor_path[2]
list(endpoint=endpoint, name=name)
}
xml_to_list <- function(x)
{
# work around breaking change in xml2 1.2
if(packageVersion("xml2") < package_version("1.2"))
xml2::as_list(x)
else (xml2::as_list(x))[[1]]
}
# check whether to retry a failed file transfer
# retry on:
# - curl error (except host not found)
# - http 400: MD5 mismatch
retry_transfer <- function(res)
{
UseMethod("retry_transfer")
}
retry_transfer.error <- function(res)
{
grepl("curl", deparse(res$call[[1]]), fixed=TRUE) &&
!grepl("Could not resolve host", res$message, fixed=TRUE)
}
retry_transfer.response <- function(res)
{
httr::status_code(res) == 400L &&
grepl("Md5Mismatch", rawToChar(httr::content(res, as="raw")), fixed=TRUE)
}
as_datetime <- function(x, format="%a, %d %b %Y %H:%M:%S", tz="GMT")
{
as.POSIXct(x, format=format, tz=tz)
}
delete_confirmed <- function(confirm, name, type)
{
if(!interactive() || !confirm)
return(TRUE)
msg <- sprintf("Are you sure you really want to delete the %s '%s'?", type, name)
ok <- if(getRversion() < numeric_version("3.5.0"))
{
msg <- paste(msg, "(yes/No/cancel) ")
yn <- readline(msg)
if(nchar(yn) == 0)
FALSE
else tolower(substr(yn, 1, 1)) == "y"
}
else utils::askYesNo(msg, FALSE)
isTRUE(ok)
}
render_xml <- function(lst)
{
xml <- xml2::as_xml_document(lst)
rc <- rawConnection(raw(0), "w")
on.exit(close(rc))
xml2::write_xml(xml, rc)
rawToChar(rawConnectionValue(rc))
}
sign_sha256 <- function(string, key)
{
openssl::base64_encode(openssl::sha256(charToRaw(string), openssl::base64_decode(key)))
}
url_encode <- function(string, reserved=FALSE)
{
URLencode(enc2utf8(string), reserved=reserved)
}
encode_md5 <- function(x, ...)
{
openssl::base64_encode(openssl::md5(x, ...))
}
| /scratch/gouwar.j/cran-all/cranData/AzureStor/R/storage_utils.R |
#' Save and load R objects to/from a storage account
#'
#' @param object An R object to save to storage.
#' @param container An Azure storage container object.
#' @param file The name of a file in storage.
#' @param envir For `storage_save_rdata` and `storage_load_rdata`, the environment from which to get objects to save, or in which to restore objects, respectively.
#' @param ... Further arguments passed to `saveRDS`, `memDecompress`, `save` and `load` as appropriate.
#' @details
#' These are equivalents to `saveRDS`, `readRDS`, `save` and `load` for saving and loading R objects to a storage account. They allow datasets and objects to be easily transferred to and from an R session, without having to manually create and delete temporary files.
#'
#' @seealso
#' [storage_download], [download_blob], [download_azure_file], [download_adls_file], [save], [load], [saveRDS]
#' @examples
#' \dontrun{
#'
#' bl <- storage_endpoint("https://mystorage.blob.core.windows.net/", key="access_key")
#' cont <- storage_container(bl, "mycontainer")
#'
#' storage_save_rds(iris, cont, "iris.rds")
#' irisnew <- storage_load_rds(cont, "iris.rds")
#' identical(iris, irisnew) # TRUE
#'
#' storage_save_rdata(iris, mtcars, container=cont, file="dataframes.rdata")
#' storage_load_rdata(cont, "dataframes.rdata")
#'
#' }
#' @rdname storage_save
#' @export
storage_save_rds <- function(object, container, file, ...)
{
# save to a temporary file to avoid dealing with memCompress/memDecompress hassles
tmpsave <- tempfile(fileext=".rdata")
on.exit(unlink(tmpsave))
saveRDS(object, tmpsave, ...)
storage_upload(container, tmpsave, file)
}
#' @rdname storage_save
#' @export
storage_load_rds <- function(container, file, ...)
{
conn <- storage_download(container, file, NULL)
unserialize(memDecompress(conn, ...))
}
#' @rdname storage_save
#' @export
storage_save_rdata <- function(..., container, file, envir=parent.frame())
{
# save to a temporary file as saving to a connection disables compression
tmpsave <- tempfile(fileext=".rdata")
on.exit(unlink(tmpsave))
save(..., file=tmpsave, envir=envir)
storage_upload(container, tmpsave, file)
}
#' @rdname storage_save
#' @export
storage_load_rdata <- function(container, file, envir=parent.frame(), ...)
{
conn <- storage_download(container, file, NULL)
load(rawConnection(conn, open="rb"), envir=envir, ...)
}
#' Read and write a data frame to/from a storage account
#'
#' @param object A data frame to write to storage.
#' @param container An Azure storage container object.
#' @param file The name of a file in storage.
#' @param delim For `storage_write_delim` and `storage_read_delim`, the field delimiter. Defaults to `\t` (tab).
#' @param ... Optional arguments passed to the file reading/writing functions. See 'Details'.
#' @details
#' These functions let you read and write data frames to storage. `storage_read_delim` and `storage_write_delim` are for reading and writing arbitrary delimited files. `storage_read_csv` and `storage_write_csv` are for comma-delimited (CSV) files. `storage_read_csv2` and `storage_write_csv2` are for files with the semicolon `;` as delimiter and comma `,` as the decimal point, as used in some European countries.
#'
#' If the readr package is installed, these functions call down to `readr::read_delim`, `write_delim`, `read_csv2` and `write_csv2`. Otherwise, they fall back to the base functions `read.delim` and `write.table`.
#' @seealso
#' [storage_download], [download_blob], [download_azure_file], [download_adls_file],
#' [write.table], [read.csv], [readr::write_delim], [readr::read_delim]
#' @examples
#' \dontrun{
#'
#' bl <- storage_endpoint("https://mystorage.blob.core.windows.net/", key="access_key")
#' cont <- storage_container(bl, "mycontainer")
#'
#' storage_write_csv(iris, cont, "iris.csv")
#' # if readr is not installed
#' irisnew <- storage_read_csv(cont, "iris.csv", stringsAsFactors=TRUE)
#' # if readr is installed
#' irisnew <- storage_read_csv(cont, "iris.csv", col_types="nnnnf")
#'
#' all(mapply(identical, iris, irisnew)) # TRUE
#'
#' }
#' @rdname storage_write
#' @export
storage_write_delim <- function(object, container, file, delim="\t", ...)
{
func <- if(requireNamespace("readr", quietly=TRUE))
storage_write_delim_readr
else storage_write_delim_base
func(object, container, file, delim=delim, ...)
}
storage_write_delim_readr <- function(object, container, file, delim="\t", ...)
{
conn <- rawConnection(raw(0), open="r+b")
readr::write_delim(object, conn, delim=delim, ...)
seek(conn, 0)
storage_upload(container, conn, file)
}
storage_write_delim_base <- function(object, container, file, delim="\t", ...)
{
conn <- rawConnection(raw(0), open="r+b")
utils::write.table(object, conn, sep=delim, ...)
seek(conn, 0)
storage_upload(container, conn, file)
}
#' @rdname storage_write
#' @export
storage_write_csv <- function(object, container, file, ...)
{
func <- if(requireNamespace("readr", quietly=TRUE))
storage_write_csv_readr
else storage_write_csv_base
func(object, container, file, ...)
}
storage_write_csv_readr <- function(object, container, file, ...)
{
storage_write_delim_readr(object, container, file, delim=",", ...)
}
storage_write_csv_base <- function(object, container, file, ...)
{
storage_write_delim_base(object, container, file, delim=",", ...)
}
#' @rdname storage_write
#' @export
storage_write_csv2 <- function(object, container, file, ...)
{
func <- if(requireNamespace("readr", quietly=TRUE))
storage_write_csv2_readr
else storage_write_csv2_base
func(object, container, file, ...)
}
storage_write_csv2_readr <- function(object, container, file, ...)
{
conn <- rawConnection(raw(0), open="r+b")
readr::write_csv2(object, conn, ...)
seek(conn, 0)
storage_upload(container, conn, file)
}
storage_write_csv2_base <- function(object, container, file, ...)
{
storage_write_delim_base(object, container, file, delim=";", dec=",", ...)
}
#' @rdname storage_write
#' @export
storage_read_delim <- function(container, file, delim="\t", ...)
{
func <- if(requireNamespace("readr", quietly=TRUE))
storage_read_delim_readr
else storage_read_delim_base
func(container, file, delim=delim, ...)
}
storage_read_delim_readr <- function(container, file, delim="\t", ...)
{
con <- rawConnection(raw(0), "r+")
on.exit(try(close(con), silent=TRUE))
storage_download(container, file, con)
readr::read_delim(con, delim=delim, ...)
}
storage_read_delim_base <- function(container, file, delim="\t", ...)
{
txt <- storage_download(container, file, NULL)
utils::read.delim(text=rawToChar(txt), sep=delim, ...)
}
#' @rdname storage_write
#' @export
storage_read_csv <- function(container, file, ...)
{
func <- if(requireNamespace("readr", quietly=TRUE))
storage_read_csv_readr
else storage_read_csv_base
func(container, file, ...)
}
storage_read_csv_readr <- function(container, file, ...)
{
storage_read_delim_readr(container, file, delim=",", ...)
}
storage_read_csv_base <- function(container, file, ...)
{
storage_read_delim_base(container, file, delim=",", ...)
}
#' @rdname storage_write
#' @export
storage_read_csv2 <- function(container, file, ...)
{
func <- if(requireNamespace("readr", quietly=TRUE))
storage_read_csv2_readr
else storage_read_csv2_base
func(container, file, ...)
}
storage_read_csv2_readr <- function(container, file, ...)
{
con <- rawConnection(raw(0), "r+")
on.exit(try(close(con), silent=TRUE))
storage_download(container, file, con)
readr::read_csv2(con, ...)
}
storage_read_csv2_base <- function(container, file, ...)
{
storage_read_delim_base(container, file, delim=";", dec=",", ...)
}
| /scratch/gouwar.j/cran-all/cranData/AzureStor/R/transfer_format_utils.R |
#' Upload and download generics
#'
#' @param container A storage container object.
#' @param src,dest For `upload_to_url` and `download_from_url`, the source and destination files to transfer.
#' @param key,token,sas Authentication arguments: an access key, Azure Active Directory (AAD) token or a shared access signature (SAS). If multiple arguments are supplied, a key takes priority over a token, which takes priority over a SAS. For `upload_to_url` and `download_from_url`, you can also provide a SAS as part of the URL itself.
#' @param ... Further arguments to pass to lower-level functions.
#' @param overwrite For downloading, whether to overwrite any destination files that exist.
#'
#' @details
#' These functions allow you to transfer files to and from a storage account.
#'
#' `storage_upload`, `storage_download`, `storage_multiupload` and `storage_multidownload` take as first argument a storage container, either for blob storage, file storage, or ADLSgen2. They dispatch to the corresponding file transfer functions for the given storage type.
#'
#' `upload_to_url` and `download_from_url` allow you to transfer a file to or from Azure storage, given the URL of the source or destination. The storage details (endpoint, container name, and so on) are obtained from the URL.
#'
#' By default, the upload and download functions will display a progress bar while the transfer is in progress. To turn this off, use `options(azure_storage_progress_bar=FALSE)`. To turn the progress bar back on, use `options(azure_storage_progress_bar=TRUE)`.
#'
#' @seealso
#' [storage_container], [blob_container], [file_share], [adls_filesystem]
#'
#' [download_blob], [download_azure_file], [download_adls_file], [call_azcopy]
#'
#' @examples
#' \dontrun{
#'
#' # download from blob storage
#' bl <- storage_endpoint("https://mystorage.blob.core.windows.net/", key="access_key")
#' cont <- storage_container(bl, "mycontainer")
#' storage_download(cont, "bigfile.zip", "~/bigfile.zip")
#'
#' # same download but directly from the URL
#' download_from_url("https://mystorage.blob.core.windows.net/mycontainer/bigfile.zip",
#' "~/bigfile.zip",
#' key="access_key")
#'
#' # upload to ADLSgen2
#' ad <- storage_endpoint("https://myadls.dfs.core.windows.net/", token=mytoken)
#' cont <- storage_container(ad, "myfilesystem")
#' create_storage_dir(cont, "newdir")
#' storage_upload(cont, "files.zip", "newdir/files.zip")
#'
#' # same upload but directly to the URL
#' upload_to_url("files.zip",
#' "https://myadls.dfs.core.windows.net/myfilesystem/newdir/files.zip",
#' token=mytoken)
#'
#' }
#' @rdname file_transfer
#' @export
storage_upload <- function(container, ...)
UseMethod("storage_upload")
#' @rdname file_transfer
#' @export
storage_upload.blob_container <- function(container, ...)
upload_blob(container, ...)
#' @rdname file_transfer
#' @export
storage_upload.file_share <- function(container, ...)
upload_azure_file(container, ...)
#' @rdname file_transfer
#' @export
storage_upload.adls_filesystem <- function(container, ...)
upload_adls_file(container, ...)
#' @rdname file_transfer
#' @export
storage_multiupload <- function(container, ...)
UseMethod("storage_multiupload")
#' @rdname file_transfer
#' @export
storage_multiupload.blob_container <- function(container, ...)
multiupload_blob(container, ...)
#' @rdname file_transfer
#' @export
storage_multiupload.file_share <- function(container, ...)
multiupload_azure_file(container, ...)
#' @rdname file_transfer
#' @export
storage_multiupload.adls_filesystem <- function(container, ...)
multiupload_adls_file(container, ...)
# download
#' @rdname file_transfer
#' @export
storage_download <- function(container, ...)
UseMethod("storage_download")
#' @rdname file_transfer
#' @export
storage_download.blob_container <- function(container, ...)
download_blob(container, ...)
#' @rdname file_transfer
#' @export
storage_download.file_share <- function(container, ...)
download_azure_file(container, ...)
#' @rdname file_transfer
#' @export
storage_download.adls_filesystem <- function(container, ...)
download_adls_file(container, ...)
#' @rdname file_transfer
#' @export
storage_multidownload <- function(container, ...)
UseMethod("storage_multidownload")
#' @rdname file_transfer
#' @export
storage_multidownload.blob_container <- function(container, ...)
multidownload_blob(container, ...)
#' @rdname file_transfer
#' @export
storage_multidownload.file_share <- function(container, ...)
multidownload_azure_file(container, ...)
#' @rdname file_transfer
#' @export
storage_multidownload.adls_filesystem <- function(container, ...)
multidownload_adls_file(container, ...)
#' @rdname file_transfer
#' @export
download_from_url <- function(src, dest, key=NULL, token=NULL, sas=NULL, ..., overwrite=FALSE)
{
az_path <- parse_storage_url(src)
if(is.null(sas))
sas <- find_sas(src)
endpoint <- storage_endpoint(az_path[1], key=key, token=token, sas=sas, ...)
cont <- storage_container(endpoint, az_path[2])
storage_download(cont, az_path[3], dest, overwrite=overwrite)
}
#' @rdname file_transfer
#' @export
upload_to_url <- function(src, dest, key=NULL, token=NULL, sas=NULL, ...)
{
az_path <- parse_storage_url(dest)
if(is.null(sas))
sas <- find_sas(dest)
endpoint <- storage_endpoint(az_path[1], key=key, token=token, sas=sas, ...)
cont <- storage_container(endpoint, az_path[2])
storage_upload(cont, src, az_path[3])
}
find_sas <- function(url)
{
querymark <- regexpr("\\?sv", url)
if(querymark == -1)
NULL
else substr(url, querymark + 1, nchar(url))
}
| /scratch/gouwar.j/cran-all/cranData/AzureStor/R/transfer_generics.R |
normalize_src <- function(src, put_md5=FALSE)
{
UseMethod("normalize_src")
}
normalize_src.character <- function(src, put_md5=FALSE)
{
content_type <- mime::guess_type(src)
con <- file(src, open="rb")
size <- file.size(src)
if(put_md5)
{
md5 <- encode_md5(con)
seek(con, 0)
}
else md5 <- NULL
list(content_type=content_type, con=con, size=size, md5=md5)
}
normalize_src.textConnection <- function(src, put_md5=FALSE)
{
# convert to raw connection
content_type <- "application/octet-stream"
src <- charToRaw(paste0(readLines(src), collapse="\n"))
size <- length(src)
md5 <- if(put_md5)
encode_md5(src)
else NULL
con <- rawConnection(src)
list(content_type=content_type, con=con, size=size, md5=md5)
}
normalize_src.rawConnection <- function(src, put_md5=FALSE)
{
content_type <- "application/octet-stream"
# need to read the data to get object size (!)
size <- 0
repeat
{
x <- readBin(src, "raw", n=1e6)
if(length(x) == 0)
break
size <- size + length(x)
}
seek(src, 0) # reposition connection after reading
if(put_md5)
{
md5 <- encode_md5(src)
seek(src, 0)
}
else md5 <- NULL
list(content_type=content_type, con=src, size=size, md5=md5)
}
multiupload_internal <- function(container, src, dest, recursive, ..., max_concurrent_transfers=10)
{
src <- make_upload_set(src, recursive)
wildcard_src <- !is.null(attr(src, "root"))
if(missing(dest))
{
dest <- if(wildcard_src) "/" else basename(src)
}
n_src <- length(src)
n_dest <- length(dest)
if(n_src == 0)
stop("No files to transfer", call.=FALSE)
if(wildcard_src && n_dest > 1)
stop("'dest' for a wildcard 'src' must be a single directory", call.=FALSE)
if(!wildcard_src && n_dest != n_src)
stop("'dest' must contain one name per file in 'src'", call.=FALSE)
if(n_src == 1 && !wildcard_src)
return(storage_upload(container, src, dest, ...))
if(wildcard_src)
dest <- sub("//", "/", file.path(dest, substr(src, nchar(attr(src, "root")) + 2, nchar(src))))
init_pool(max_concurrent_transfers)
pool_export("container", envir=environment())
pool_map(function(s, d, ...) AzureStor::storage_upload(container, s, d, ...),
src, dest, MoreArgs=list(...))
invisible(NULL)
}
multidownload_internal <- function(container, src, dest, recursive, ..., max_concurrent_transfers=10)
{
src <- make_download_set(container, src, recursive)
wildcard_src <- !is.null(attr(src, "root"))
if(missing(dest))
{
dest <- if(wildcard_src) "." else basename(src)
}
n_src <- length(src)
n_dest <- length(dest)
if(n_src == 0)
stop("No files to transfer", call.=FALSE)
if(wildcard_src && n_dest > 1)
stop("'dest' for a wildcard 'src' must be a single directory", call.=FALSE)
if(!wildcard_src && n_dest != n_src)
stop("'dest' must contain one name per file in 'src'", call.=FALSE)
if(n_src == 1 && !wildcard_src)
return(storage_download(container, src, dest, ...))
if(wildcard_src)
{
root <- attr(src, "root")
destnames <- if(root != "/")
substr(src, nchar(root) + 2, nchar(src))
else src
dest <- sub("//", "/", file.path(dest, destnames))
}
init_pool(max_concurrent_transfers)
pool_export("container", envir=environment())
pool_map(function(s, d, ...) AzureStor::storage_download(container, s, d, ...),
src, dest, MoreArgs=list(...))
invisible(NULL)
}
make_upload_set <- function(src, recursive)
{
if(length(src) == 1) # possible wildcard
{
src_dir <- dirname(src)
src_name <- basename(src)
src_spec <- glob2rx(src_name)
if(src_spec != paste0("^", gsub("\\.", "\\\\.", src_name), "$")) # a wildcard
{
fnames <- dir(src_dir, pattern=src_spec, full.names=TRUE, recursive=recursive,
ignore.case=(.Platform$OS.type == "windows"))
dnames <- list.dirs(dirname(src), full.names=TRUE, recursive=recursive)
src <- setdiff(fnames, dnames)
# store original src dir of the wildcard
attr(src, "root") <- src_dir
return(src)
}
}
src[file.exists(src)]
}
make_download_set <- function(container, src, recursive)
{
src <- sub("^/", "", src) # strip leading slash if present, not meaningful
if(length(src) == 1) # possible wildcard
{
src_dir <- dirname(src)
src_name <- basename(src)
src_spec <- glob2rx(src_name)
if(src_spec != paste0("^", gsub("\\.", "\\\\.", src_name), "$")) # a wildcard
{
if(src_dir == ".")
src_dir <- "/"
src <- list_storage_files(container, src_dir, recursive=recursive)
            if(is.null(src$isdir)) # blob listings don't always include an isdir column
src$isdir <- FALSE
src <- src$name[grepl(src_spec, basename(src$name)) & !(src$isdir)]
# store original src dir of the wildcard
attr(src, "root") <- src_dir
}
}
src
}
| /scratch/gouwar.j/cran-all/cranData/AzureStor/R/transfer_utils.R |
---
title: "How to enable AAD authentication for a storage account"
author: Hong Ooi
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{AAD authentication setup}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{utf8}
---
It's possible to authenticate to a storage account using an OAuth token obtained via Azure Active Directory (AAD). This has several advantages:
- You don't need to pass around the storage account's access key, which is like a master password: it controls all access to the account. If it gets compromised, the account is no longer secure.
- You can use role-based access control to limit which users are allowed to use the account, and what actions they can perform.
- Unlike a shared access signature (SAS), AAD authentication doesn't have a hard expiry date. As long as an AAD identity (user, service principal, etc) has the correct permissions, it can always connect to the storage account. Similarly, you can easily revoke access by removing the necessary permissions from the identity.
Here, we'll take you through the steps involved to configure a storage account for AAD authentication. The assumption here is that you're an administrator for your AAD tenant, or have the appropriate rights to create AAD app registrations and set role assignments on resources---if you don't know what these terms mean, you probably don't have such rights!
## Authenticate as a user
Authenticating as a user is relatively straightforward: you can think of it as "logging into" the storage account with your username. This involves the following:
- Create an app registration; this essentially tells Azure that the AzureStor package is allowed to access storage in your tenant
- Give the app the "user_impersonation" delegated permission for storage
- Assign your users the appropriate roles in the storage account
### Create an app registration
You can create a new app registration using any of the usual methods. For example to create an app registration in the Azure Portal (`https://portal.azure.com/`), click on "Azure Active Directory" in the menu bar down the left, go to "App registrations" and click on "New registration". Name the app something suitable, eg "AzureStor R interface to storage".
- If you want your users to be able to login with the authorization code flow, you must add a **public client/native redirect URI** of `http://localhost:1410`. This is appropriate if your users will be running R on their local PCs, with an Internet browser available.
- If you want your users to be able to login with the device code flow, you must **enable the "Allow public client flows" setting** for your app. In the Portal, you can find this setting in the "Authentication" pane once the app registration is complete. This is appropriate if your users are running R in a remote session, for example in RStudio Server, Databricks, or a VM terminal window over ssh.
Once the app registration has been created, note the app ID.
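If you prefer to script this step instead of using the Portal, the app registration can also be created from R with the AzureGraph package. The following is only a rough sketch: the tenant and app names are placeholders, and the `publicClient` property is assumed to follow the Microsoft Graph application schema.
```r
# sketch only: tenant and app names are placeholders
gr <- AzureGraph::create_graph_login("yourtenant")
# password=FALSE since users will sign in interactively;
# the publicClient property adds the native redirect URI for the authorization code flow
app <- gr$create_app("AzureStor R interface to storage",
    password=FALSE,
    publicClient=list(redirectUris=list("http://localhost:1410")))
# note down the app ID
app$properties$appId
```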
### Set the app permissions
To enable users to authenticate to storage with this app, add the "user_impersonation" delegated permission for the Azure Storage API. In the Portal, you can set this by going to the "API permissions" pane for your app registration, then clicking on "Add a permission".
### Give users a role assignment in the storage account
Having registered an app ID for AzureStor, you then add the appropriate role assignments for your users. These role assignments are set for the resource, _not_ the app registration. In the Portal, you can set these by going to your storage account resource, then clicking on "Access Control (IAM)".
The main role assignments to be aware of are:
- **Storage blob data reader**: read (but not write) blob containers and blobs. Because blob storage and ADLS2 storage are interoperable, this role also lets users read ADLS2 filesystems and files.
- **Storage blob data contributor**: read and write blob/ADLS2 containers and files.
- **Storage blob data owner**: read and write blob/ADLS2 containers and files; in addition, allow setting POSIX ACLs for ADLS2.
- **Storage queue data reader**: read (but not write or delete) queues and queue messages.
- **Storage queue data contributor**: read, write and delete queues and queue messages.
- **Storage queue data message sender**: send (write) queue messages.
- **Storage queue data message processor**: read and delete queue messages.
Note that AzureStor does not provide an R interface to queue storage; for that, you can use the AzureQstor package.
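These role assignments can also be made from R, using the role-based access control methods in AzureRMR. Here is a minimal sketch, assuming you have sufficient rights to create role assignments on the account; the tenant, subscription, resource group and account names are placeholders.
```r
# sketch only: all names below are placeholders
stor <- AzureRMR::get_azure_login("yourtenant")$
    get_subscription("subscription_id")$
    get_resource_group("resourcegroup")$
    get_storage_account("yourstorageacct")
# 'user' can be an AAD object ID (GUID), or a user object obtained via AzureGraph
stor$add_role_assignment(user, "Storage Blob Data Reader")
```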
### Authenticating
Once this is done, your users can authenticate to storage as follows. Here, `app_id` is the ID of the app registration you've just created.
```r
# obtaining a token from an R session on the local machine
token <- AzureAuth::get_azure_token("https://storage.azure.com", tenant="yourtenant", app="app_id")
# obtaining a token from a remote R session: RStudio Server/Databricks
token <- AzureAuth::get_azure_token("https://storage.azure.com", tenant="yourtenant", app="app_id",
auth_type="device_code")
# use the token to login to storage (blob in this case)
endp <- storage_endpoint("https://yourstorageacct.blob.core.windows.net", token=token)
```
## Authenticate as the application
In the previous section, we described how users can authenticate as themselves with AzureStor. Here, we'll describe how to authenticate as the _application_, that is, without a signed-in user. This is useful in a scenario such as a CI/CD or deployment pipeline that needs to run without user intervention.
The process is as follows:
- Create an app registration as before
- Give the app a client secret
- Assign the app's service principal the appropriate role in the storage account
### Create the app registration and give it a client secret
Creating the app registration is much the same as before, except that you don't need to set a redirect URI or enable public client flows. Instead you give the app a **client secret**, which is much the same as a password (and should similarly be kept secure). In the Portal, you can set this in the "Certificates and Secrets" pane for your app registration.
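As with the other setup steps, this can be scripted via AzureGraph if desired. A minimal sketch follows; the app name is a placeholder, and the generated secret should be stored securely.
```r
# sketch only: create the app registration with a client secret
gr <- AzureGraph::create_graph_login("yourtenant")
app <- gr$create_app("AzureStor deployment client") # a client secret is generated by default
app$properties$appId # the app ID
app$password         # the client secret
```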
It's also possible to authenticate with a **client certificate (public key)**, but this is more complex and we won't go into it here. For more details, see the [Azure Active Directory documentation](https://docs.microsoft.com/en-au/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow) and the [AzureAuth intro vignette](https://cran.r-project.org/package=AzureAuth/vignettes/token.html).
### Give the app's service principal a role assignment in the storage account
This is again similar to assigning a user a role, except now you assign it to the service principal for your app. The same roles assignments as before can be used.
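As a sketch, assuming `gr`, `app` and `stor` are the AzureGraph login, app object and storage account resource from the earlier examples, the role assignment might look like this:
```r
# get the service principal for the app, then assign it a role on the storage account
svc <- gr$get_service_principal(app_id=app$properties$appId)
stor$add_role_assignment(svc, "Storage Blob Data Contributor")
```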
### Authenticating
To authenticate as the app, use the following code:
```r
# use the app ID and client secret you noted before
token <- AzureAuth::get_azure_token("https://storage.azure.com", tenant="yourtenant", app="app_id",
password="client_secret")
endp <- storage_endpoint("https://yourstorageacct.blob.core.windows.net", token=token)
```
| /scratch/gouwar.j/cran-all/cranData/AzureStor/inst/doc/aad.rmd |
---
title: "AzureStor 2.0 client generics and methods"
author: Hong Ooi
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{AzureStor generics}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{utf8}
---
AzureStor 1.0 defined several functions for working with storage, which were specific to each storage type: blob, file or ADLSgen2. AzureStor 2.0 organises these functions into a consistent framework, using S3 generics and methods.
The client framework for AzureStor 2.0 is described here. While the original interface is still supported, it's recommended that you use the new framework as it protects against specifying the wrong function for a given storage type.
## Storage endpoints and containers
The `storage_endpoint` function creates a storage endpoint object based on the URL specified. The `blob_endpoint`, `file_endpoint` and `adls_endpoint` functions do the same thing, but require that the URL and the function match.
```r
# generic endpoint function: storage type inferred from URL
storage_endpoint("https://mystorage.blob.core.windows.net/", ...) # blob endpoint
storage_endpoint("https://mystorage.file.core.windows.net/", ...) # file endpoint
storage_endpoint("https://mystorage.dfs.core.windows.net/", ...) # ADLSgen2 endpoint
# storage-type-specific functions
blob_endpoint("https://mystorage.blob.core.windows.net/", ...) # blob endpoint
file_endpoint("https://mystorage.file.core.windows.net/", ...) # file endpoint
adls_endpoint("https://mystorage.dfs.core.windows.net/", ...) # ADLSgen2 endpoint
# error: using the wrong function for a given storage type
# this is not possible with the new framework
file_endpoint("https://mystorage.blob.core.windows.net/")
```
The following generics are for managing storage containers, given a storage endpoint of a given type (blob, file or ADLSgen2):
- `storage_container`: get a storage container
- `create_storage_container`
- `delete_storage_container`
- `list_storage_containers`
In turn these dispatch to the following lower-level functions for each type of storage:
| Generic | Blob | File | ADLS2 |
| ------- | ---- | ---- | ----- |
| `storage_container` | `blob_container` | `file_share` | `adls_filesystem` |
| `create_storage_container` | `create_blob_container` | `create_file_share` | `create_adls_filesystem` |
| `delete_storage_container` | `delete_blob_container` | `delete_file_share` | `delete_adls_filesystem` |
| `list_storage_containers` | `list_blob_containers` | `list_file_shares` | `list_adls_filesystems` |
```r
# example of working with containers (blob storage)
bl_endp_key <- storage_endpoint("https://mystorage.blob.core.windows.net/", key="mykey")
list_storage_containers(bl_endp_key)
cont <- storage_container(bl_endp_key, "mycontainer")
newcont <- create_storage_container(bl_endp_key, "newcontainer")
delete_storage_container(newcont)
# you can also call the lower-level functions directly if desired
bl_endp_key <- blob_endpoint("https://mystorage.blob.core.windows.net/", key="mykey")
list_blob_containers(bl_endp_key)
cont <- blob_container(bl_endp_key, "mycontainer")
newcont <- create_blob_container(bl_endp_key, "newcontainer")
delete_blob_container(newcont)
# error: using the wrong function for a given storage type
# this is not possible with the new framework
list_file_shares(bl_endp_key)
```
## Files and blobs
The following generics are for working with objects within a storage container:
- `list_storage_files`: list files/blobs in a directory (for ADLSgen2 and file storage) or blob container
- `create_storage_dir`: for ADLSgen2 and file storage, create a directory
- `delete_storage_dir`: for ADLSgen2 and file storage, delete a directory
- `delete_storage_file`: delete a file or blob
- `storage_upload`/`storage_download`: transfer a file to or from a storage container
- `storage_multiupload`/`storage_multidownload`: transfer multiple files in parallel to or from a storage container
As before, these dispatch to a family of lower-level functions for each type of storage:
| Generic | Blob | File | ADLS2 |
| ------- | ---- | ---- | ----- |
| `list_storage_files` | `list_blobs` | `list_azure_files` | `list_adls_files` |
| `create_storage_dir` | N/A | `create_azure_dir` | `create_adls_dir` |
| `delete_storage_dir` | N/A | `delete_azure_dir` | `delete_adls_dir` |
| `delete_storage_file` | `delete_blob` | `delete_azure_file` | `delete_adls_file` |
| `storage_upload` | `upload_blob` | `upload_azure_file` | `upload_adls_file` |
| `storage_download` | `download_blob` | `download_azure_file` | `download_adls_file` |
| `storage_multiupload` | `multiupload_blob` | `multiupload_azure_file` | `multiupload_adls_file` |
| `storage_multidownload` | `multidownload_blob` | `multidownload_azure_file` | `multidownload_adls_file` |
| `copy_url_to_storage` | `copy_url_to_blob` | N/A | N/A |
| `multicopy_url_to_storage` | `multicopy_url_to_blob` | N/A | N/A |
```r
# example workflow with generics (blob storage)
cont <- storage_container(bl_endp_key, "mycontainer")
list_storage_files(cont)
storage_upload(cont, "description.txt", "description")
storage_multiupload(cont, "*.tar.gz")
# using lower-level functions
cont <- blob_container(bl_endp_key, "mycontainer")
list_blobs(cont)
upload_blob(cont, "description.txt", "description")
multiupload_blob(cont, "*.tar.gz")
```
| /scratch/gouwar.j/cran-all/cranData/AzureStor/inst/doc/generics.Rmd |
---
title: "Introduction to AzureStor"
author: Hong Ooi
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Introduction to AzureStor}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{utf8}
---
This is a short introduction on how to use AzureStor.
## Storage endpoints
The interface for accessing storage is similar across blobs, files and ADLSgen2. You call the `storage_endpoint` function and provide the endpoint URI, along with your authentication credentials. AzureStor will figure out the type of storage from the URI.
AzureStor supports all the different ways you can authenticate with a storage endpoint:
- Blob storage supports authenticating with an access key, shared access signature (SAS), or an Azure Active Directory (AAD) OAuth token;
- File storage supports access key and SAS;
- ADLSgen2 supports access key and AAD token.
In the case of an AAD token, you can also provide an object obtained via `AzureAuth::get_azure_token()`. If you do this, AzureStor can automatically refresh the token for you when it expires.
```r
# various endpoints for an account: blob, file, ADLS2
bl_endp_key <- storage_endpoint("https://mystorage.blob.core.windows.net", key="access_key")
fl_endp_sas <- storage_endpoint("https://mystorage.file.core.windows.net", sas="my_sas")
ad_endp_tok <- storage_endpoint("https://mystorage.dfs.core.windows.net", token="my_token")
# alternative (recommended) way of supplying an AAD token
token <- AzureRMR::get_azure_token("https://storage.azure.com",
tenant="myaadtenant", app="app_id", password="mypassword"))
ad_endp_tok2 <- storage_endpoint("https://mystorage.dfs.core.windows.net", token=token)
```
## Listing, creating and deleting containers
AzureStor provides a rich framework for managing storage. The following generics allow you to manage storage containers:
- `storage_container`: get a storage container (blob container, file share or ADLS filesystem)
- `create_storage_container`
- `delete_storage_container`
- `list_storage_containers`
```r
# example of working with containers (blob storage)
list_storage_containers(bl_endp_key)
cont <- storage_container(bl_endp_key, "mycontainer")
newcont <- create_storage_container(bl_endp_key, "newcontainer")
delete_storage_container(newcont)
```
## Files and blobs
The following functions are for working with objects within a storage container:
- `list_storage_files`: list files/blobs in a directory (for ADLSgen2 and file storage) or blob container
- `create_storage_dir`: for ADLSgen2 and file storage, create a directory
- `delete_storage_dir`: for ADLSgen2 and file storage, delete a directory
- `delete_storage_file`: delete a file or blob
- `storage_file_exists`: check that a file or blob exists
- `storage_upload`/`storage_download`: transfer a file to or from a storage container
- `storage_multiupload`/`storage_multidownload`: transfer multiple files in parallel to or from a storage container
```r
# example of working with files and directories (ADLSgen2)
cont <- storage_container(ad_endp_tok, "myfilesystem")
list_storage_files(cont)
create_storage_dir(cont, "newdir")
storage_download(cont, "/readme.txt")
storage_multiupload(cont, "N:/data/*.*", "newdir") # uploading everything in a directory
```
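The existence-check and deletion generics listed above work in the same way. A brief sketch, continuing with the same container object (the file and directory names here are purely illustrative):
```r
# check whether a file/blob exists, then delete it
storage_file_exists(cont, "newdir/data.csv")
delete_storage_file(cont, "newdir/data.csv")
# remove the directory itself (ADLSgen2 and file storage only)
delete_storage_dir(cont, "newdir")
```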
## Uploading and downloading
AzureStor includes a number of extra features to make transferring files efficient.
### Parallel connections
The `storage_multiupload/download` functions transfer multiple files in parallel, which usually results in major speedups when transferring multiple small files. The pool is created the first time a parallel file transfer is performed, and persists for the duration of the R session; this means you don't have to wait for the pool to be (re-)created each time.
```r
# uploading/downloading multiple files at once: use a wildcard to specify files to transfer
storage_multiupload(cont, src="N:/logfiles/*.zip")
storage_multidownload(cont, src="/monthly/jan*.*", dest="~/data/january")
# or supply a vector of file specs as the source and destination
src <- c("file1.csv", "file2.csv", "file3.csv")
dest <- file.path("data", src)
storage_multiupload(cont, src, dest)
```
### File format helpers
AzureStor includes convenience functions to transfer data in a number of commonly used formats: RDS, RData, TSV (tab-delimited), CSV, and CSV2 (semicolon-delimited). These work via connections and so don't create temporary files on disk.
```r
# save an R object to storage and read it back again
obj <- list(n=42L, x=pi, c="foo")
storage_save_rds(obj, cont, "obj.rds")
objnew <- storage_load_rds(cont, "obj.rds")
identical(obj, objnew) # TRUE
# reading/writing data to CSV format
storage_write_csv(mtcars, cont, "mtcars.csv")
mtnew <- storage_read_csv(cont, "mtcars.csv")
all(mapply(identical, mtcars, mtnew)) # TRUE
```
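The RData and TSV helpers mentioned above follow the same pattern. A short sketch (the object and file names are arbitrary):
```r
# save several objects in RData format, then reload them into the current session
x <- 42
y <- "hello"
storage_save_rdata(x, y, container=cont, file="objects.rdata")
storage_load_rdata(cont, "objects.rdata")
# reading/writing tab-delimited (TSV) data
storage_write_delim(mtcars, cont, "mtcars.tsv")
mtnew <- storage_read_delim(cont, "mtcars.tsv")
```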
### Transfer to and from connections
You can upload a (single) in-memory R object via a connection, and similarly, you can download a file to a connection, or return it as a raw vector. This lets you transfer an object without having to create a temporary file as an intermediate step.
```r
# uploading serialized R objects via connections
json <- jsonlite::toJSON(iris, pretty=TRUE, auto_unbox=TRUE)
con <- textConnection(json)
storage_upload(cont, src=con, dest="iris.json")
rds <- serialize(iris, NULL)
con <- rawConnection(rds)
storage_upload(cont, src=con, dest="iris.rds")
# downloading files into memory: as a raw vector with dest=NULL, and via a connection
rawvec <- storage_download(cont, src="iris.json", dest=NULL)
rawToChar(rawvec)
con <- rawConnection(raw(0), "r+")
storage_download(cont, src="iris.rds", dest=con)
unserialize(con)
```
### Copy from URLs (blob storage only)
The `copy_url_to_storage` function lets you transfer the contents of a URL directly to storage, without having to download it to your local machine first. The `multicopy_url_to_storage` function does the same, but for a vector of URLs. Currently, these only work for blob storage.
```r
# copy from a public URL: Iris data from UCI machine learning repository
copy_url_to_storage(cont,
"https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data",
"iris.csv")
# copying files from another storage account, by appending a SAS to the URL(s)
sas <- "?sv=...."
files <- paste0("https://srcstorage.blob.core.windows.net/container/file", 0:9, ".csv", sas)
multicopy_url_to_storage(cont, files)
```
### Appending (blob storage only)
AzureStor supports uploading to append blobs. An append blob is comprised of blocks and is optimized for append operations; it is well suited for data that is constantly growing, but should not be modified once written, such as server logs.
To upload to an append blob, specify `type="AppendBlob"` in the `storage_upload` call. To append data (rather than overwriting an existing blob), include the argument `append=TRUE`. See `?upload_blob` for more details.
```r
# create a new append blob
storage_upload(cont, src="logfile1.csv", dest="logfile.csv", type="AppendBlob")
# appending to an existing blob
storage_upload(cont, src="logfile2.csv", dest="logfile.csv", type="AppendBlob", append=TRUE)
```
### Interface to AzCopy
AzureStor includes an interface to [AzCopy](https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10), Microsoft's high-performance commandline utility for copying files to and from storage. To take advantage of this, simply include the argument `use_azcopy=TRUE` on any upload or download function. AzureStor will then call AzCopy to perform the file transfer, rather than using its own internal code. In addition, a `call_azcopy` function is provided to let you use AzCopy for any task.
```r
# use azcopy to download
myfs <- storage_container(ad_endp_tok, "myfilesystem")
storage_download(myfs, "/incoming/bigfile.tar.gz", "/data", use_azcopy=TRUE)
# use azcopy to sync a local and remote dir
call_azcopy("sync", "c:/local/path", "https://mystorage.blob.core.windows.net/mycontainer", "--recursive=true")
```
For more information, see the [AzCopy repo on GitHub](https://github.com/Azure/azure-storage-azcopy).
**Note that AzureStor uses AzCopy version 10. It is incompatible with versions 8.1 and earlier.**
## Other features
### Parallel connections
The `storage_multiupload/download` functions mentioned above use a background process pool supplied by AzureRMR. You can also use this pool to parallelise tasks for which there is no built-in function. For example, the following code will delete multiple files in parallel:
```r
files_to_delete <- list_storage_files(cont, "datadir", info="name")
# initialise the background pool with 10 nodes
AzureRMR::init_pool(10)
# export the container object to the nodes
AzureRMR::pool_export("cont")
# delete the files
AzureRMR::pool_sapply(files_to_delete, function(f) AzureStor::delete_storage_file(cont, f))
```
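When you have finished with the parallel tasks, you can shut the pool down to free up its resources:
```r
# shut down the background pool when it's no longer needed
AzureRMR::delete_pool()
```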
### Metadata
To get and set user-defined properties (metadata) for storage objects, use the `get_storage_metadata` and `set_storage_metadata` functions.
```r
fs <- storage_container("https://mystorage.dfs.core.windows.net/myshare", key="access_key")
storage_upload(fs, "iris.csv", "newdir/iris.csv")
set_storage_metadata(fs, "newdir/iris.csv", name1="value1")
# will be list(name1="value1")
get_storage_metadata(fs, "newdir/iris.csv")
set_storage_metadata(fs, "newdir/iris.csv", name2="value2")
# will be list(name1="value1", name2="value2")
get_storage_metadata(fs, "newdir/iris.csv")
set_storage_metadata(fs, "newdir/iris.csv", name3="value3", keep_existing=FALSE)
# will be list(name3="value3")
get_storage_metadata(fs, "newdir/iris.csv")
# deleting all metadata
set_storage_metadata(fs, "newdir/iris.csv", keep_existing=FALSE)
# if no filename supplied, get/set metadata for the container
get_storage_metadata(fs)
```
## Admin interface
Finally, AzureStor's admin-side interface allows you to easily create and delete storage accounts, as well as obtain access keys and generate a SAS. Here is a sample workflow:
```r
library(AzureStor)
# authenticate with Resource Manager
az <- AzureRMR::get_azure_login("mytenant")
sub1 <- az$get_subscription("subscription_id")
rg <- sub1$get_resource_group("resgroup")
# get an existing storage account
rdevstor1 <- rg$get_storage_account("rdevstor1")
rdevstor1
#<Azure resource Microsoft.Storage/storageAccounts/rdevstor1>
# Account type: Storage
# SKU: name=Standard_LRS, tier=Standard
# Endpoints:
# blob: https://rdevstor1.blob.core.windows.net/
# queue: https://rdevstor1.queue.core.windows.net/
# table: https://rdevstor1.table.core.windows.net/
# file: https://rdevstor1.file.core.windows.net/
# ...
# retrieve admin keys
rdevstor1$list_keys()
# create a shared access signature (SAS)
rdevstor1$get_account_sas(permissions="rw")
# obtain an endpoint object for accessing storage (will have the access key included by default)
rdevstor1$get_blob_endpoint()
#Azure blob storage endpoint
#URL: https://rdevstor1.blob.core.windows.net/
#Access key: <hidden>
#Azure Active Directory token: <none supplied>
#Account shared access signature: <none supplied>
#Storage API version: 2018-03-28
# create a new storage account
blobstor2 <- rg$create_storage_account("blobstor2", location="australiaeast", kind="BlobStorage")
# delete it (will ask for confirmation)
blobstor2$delete()
```
For more information about the different types of storage, see the [Microsoft Docs site](https://docs.microsoft.com/en-us/azure/storage/). Note that there are other types of storage (queue, table) that do not have a client interface exposed by AzureStor.
| /scratch/gouwar.j/cran-all/cranData/AzureStor/inst/doc/intro.rmd |
---
title: "How to enable AAD authentication for a storage account"
author: Hong Ooi
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{AAD authentication setup}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{utf8}
---
It's possible to authenticate to a storage account using an OAuth token obtained via Azure Active Directory (AAD). This has several advantages:
- You don't need to pass around the storage account's access key, which is like a master password: it controls all access to the account. If it gets compromised, the account is no longer secure.
- You can use role-based access control to limit which users are allowed to use the account, and what actions they can perform.
- Unlike a shared access signature (SAS), AAD authentication doesn't have a hard expiry date. As long as an AAD identity (user, service principal, etc) has the correct permissions, it can always connect to the storage account. Similarly, you can easily revoke access by removing the necessary permissions from the identity.
Here, we'll take you through the steps involved to configure a storage account for AAD authentication. The assumption here is that you're an administrator for your AAD tenant, or have the appropriate rights to create AAD app registrations and set role assignments on resources---if you don't know what these terms mean, you probably don't have such rights!
## Authenticate as a user
Authenticating as a user is relatively straightforward: you can think of it as "logging into" the storage account with your username. This involves the following:
- Create an app registration; this essentially tells Azure that the AzureStor package is allowed to access storage in your tenant
- Give the app the "user_impersonation" delegated permission for storage
- Assign your users the appropriate roles in the storage account
### Create an app registration
You can create a new app registration using any of the usual methods. For example to create an app registration in the Azure Portal (`https://portal.azure.com/`), click on "Azure Active Directory" in the menu bar down the left, go to "App registrations" and click on "New registration". Name the app something suitable, eg "AzureStor R interface to storage".
- If you want your users to be able to login with the authorization code flow, you must add a **public client/native redirect URI** of `http://localhost:1410`. This is appropriate if your users will be running R on their local PCs, with an Internet browser available.
- If you want your users to be able to login with the device code flow, you must **enable the "Allow public client flows" setting** for your app. In the Portal, you can find this setting in the "Authentication" pane once the app registration is complete. This is appropriate if your users are running R in a remote session, for example in RStudio Server, Databricks, or a VM terminal window over ssh.
Once the app registration has been created, note the app ID.
### Set the app permissions
To enable users to authenticate to storage with this app, add the "user_impersonation" delegated permission for the Azure Storage API. In the Portal, you can set this by going to the "API permissions" pane for your app registration, then clicking on "Add a permission".
### Give users a role assignment in the storage account
Having registered an app ID for AzureStor, you then add the appropriate role assignments for your users. These role assignments are set for the resource, _not_ the app registration. In the Portal, you can set these by going to your storage account resource, then clicking on "Access Control (IAM)".
The main role assignments to be aware of are:
- **Storage blob data reader**: read (but not write) blob containers and blobs. Because blob storage and ADLS2 storage are interoperable, this role also lets users read ADLS2 filesystems and files.
- **Storage blob data contributor**: read and write blob/ADLS2 containers and files.
- **Storage blob data owner**: read and write blob/ADLS2 containers and files; in addition, allow setting POSIX ACLs for ADLS2.
- **Storage queue data reader**: read (but not write or delete) queues and queue messages.
- **Storage queue data contributor**: read, write and delete queues and queue messages.
- **Storage queue data message sender**: send (write) queue messages.
- **Storage queue data message processor**: read and delete queue messages.
Note that AzureStor does not provide an R interface to queue storage; for that, you can use the AzureQstor package.
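If you prefer to script this step rather than use the Portal, role assignments can also be granted from R via AzureRMR. A rough sketch, in which the resource names are placeholders and the principal is identified by its AAD object ID:
```r
stor <- AzureRMR::get_azure_login("yourtenant")$
    get_subscription("sub_id")$
    get_resource_group("resgroup")$
    get_storage_account("yourstorageacct")
# the principal can be an object ID (GUID) or an AzureGraph user/app object
stor$add_role_assignment("aaaaaaaa-0000-0000-0000-000000000000",
    "Storage Blob Data Contributor")
```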
### Authenticating
Once this is done, your users can authenticate to storage as follows. Here, `app_id` is the ID of the app registration you've just created.
```r
# obtaining a token from an R session on the local machine
token <- AzureAuth::get_azure_token("https://storage.azure.com", tenant="yourtenant", app="app_id")
# obtaining a token from a remote R session: RStudio Server/Databricks
token <- AzureAuth::get_azure_token("https://storage.azure.com", tenant="yourtenant", app="app_id",
auth_type="device_code")
# use the token to login to storage (blob in this case)
endp <- storage_endpoint("https://yourstorageacct.blob.core.windows.net", token=token)
```
## Authenticate as the application
In the previous section, we described how users can authenticate as themselves with AzureStor. Here, we'll describe how to authenticate as the _application_, that is, without a signed-in user. This is useful in a scenario such as a CI/CD or deployment pipeline that needs to run without user intervention.
The process is as follows:
- Create an app registration as before
- Give the app a client secret
- Assign the app's service principal the appropriate role in the storage account
### Create the app registration and give it a client secret
Creating the app registration is much the same as before, except that you don't need to set a redirect URI or enable public client flows. Instead you give the app a **client secret**, which is much the same as a password (and should similarly be kept secure). In the Portal, you can set this in the "Certificates and Secrets" pane for your app registration.
It's also possible to authenticate with a **client certificate (public key)**, but this is more complex and we won't go into it here. For more details, see the [Azure Active Directory documentation](https://docs.microsoft.com/en-au/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow) and the [AzureAuth intro vignette](https://cran.r-project.org/package=AzureAuth/vignettes/token.html).
### Give the app's service principal a role assignment in the storage account
This is again similar to assigning a user a role, except now you assign it to the service principal for your app. The same role assignments as before can be used.
### Authenticating
To authenticate as the app, use the following code:
```r
# use the app ID and client secret you noted before
token <- AzureAuth::get_azure_token("https://storage.azure.com", tenant="yourtenant", app="app_id",
password="client_secret")
endp <- storage_endpoint("https://yourstorageacct.blob.core.windows.net", token=token)
```
| /scratch/gouwar.j/cran-all/cranData/AzureStor/vignettes/aad.rmd |
#' @import AzureRMR
#' @import AzureStor
NULL
utils::globalVariables(c("self", "private"))
.onLoad <- function(libname, pkgname)
{
AzureStor::az_storage$set("public", "get_table_endpoint", overwrite=TRUE,
function(key=self$list_keys()[1], sas=NULL, token=NULL)
{
table_endpoint(self$properties$primaryEndpoints$table, key=key, sas=sas, token=token)
})
}
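# usage sketch: with the package loaded, an AzureStor storage account object gains a
# get_table_endpoint() method, e.g. (assuming 'stor' is an az_storage resource object):
#   endp <- stor$get_table_endpoint()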
# assorted imports of friend functions
sign_sha256 <- get("sign_sha256", getNamespace("AzureStor"))
is_endpoint_url <- get("is_endpoint_url", getNamespace("AzureStor"))
delete_confirmed <- get("delete_confirmed", getNamespace("AzureStor"))
storage_error_message <- get("storage_error_message", getNamespace("AzureStor"))
process_storage_response <- get("process_storage_response", getNamespace("AzureStor"))
| /scratch/gouwar.j/cran-all/cranData/AzureTableStor/R/AzureTableStor.R |
#' @export
sign_request.table_endpoint <- function(endpoint, verb, url, headers, api, ...)
{
make_sig <- function(key, verb, acct_name, resource, headers)
{
names(headers) <- tolower(names(headers))
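        # string-to-sign for the table service shared key scheme:
        # VERB \n Content-MD5 \n Content-Type \n Date \n CanonicalizedResource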
sigstr <- paste(verb,
as.character(headers[["content-md5"]]),
as.character(headers[["content-type"]]),
as.character(headers[["date"]]),
resource, sep = "\n")
sigstr <- sub("\n$", "", sigstr)
paste0("SharedKey ", acct_name, ":", sign_sha256(sigstr, key))
}
acct_name <- sub("\\..+$", "", url$host)
resource <- paste0("/", acct_name, "/", url$path)
resource <- gsub("//", "/", resource)
    if (is.null(headers$date) && is.null(headers$Date))
headers$date <- httr::http_date(Sys.time())
if (is.null(headers$`x-ms-version`))
headers$`x-ms-version` <- api
sig <- make_sig(endpoint$key, verb, acct_name, resource, headers)
utils::modifyList(headers, list(Host=url$host, Authorization=sig))
}
| /scratch/gouwar.j/cran-all/cranData/AzureTableStor/R/sign_request.R |
#' Operations with azure tables
#'
#' @param endpoint An object of class `table_endpoint` or, for `create_storage_table.storage_table`, an object of class `storage_table`.
#' @param name The name of a table in a storage account.
#' @param confirm For deleting a table, whether to ask for confirmation.
#' @param ... Other arguments passed to lower-level functions.
#' @rdname storage_table
#' @details
#' These methods are for accessing and managing tables within a storage account.
#' @return
#' `storage_table` and `create_storage_table` return an object of class `storage_table`. `list_storage_tables` returns a list of such objects.
#' @seealso
#' [table_endpoint], [table_entity]
#' @examples
#' \dontrun{
#'
#' endp <- table_endpoint("https://mystorageacct.table.core.windows.net", key="mykey")
#'
#' create_storage_table(endp, "mytable")
#' tab <- storage_table(endp, "mytable2")
#' create_storage_table(tab)
#' list_storage_tables(endp)
#' delete_storage_table(tab)
#' delete_storage_table(endp, "mytable")
#'
#' }
#' @export
storage_table <- function(endpoint, ...)
{
UseMethod("storage_table")
}
#' @rdname storage_table
#' @export
storage_table.table_endpoint <- function(endpoint, name, ...)
{
structure(list(endpoint=endpoint, name=name), class="storage_table")
}
#' @rdname storage_table
#' @export
list_storage_tables <- function(endpoint, ...)
{
UseMethod("list_storage_tables")
}
#' @rdname storage_table
#' @export
list_storage_tables.table_endpoint <- function(endpoint, ...)
{
opts <- list()
val <- list()
repeat
{
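        # page through the results: the continuation token, if any, is returned in the
        # x-ms-continuation-NextTableName response header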
res <- call_table_endpoint(endpoint, "Tables", options=opts, http_status_handler="pass")
httr::stop_for_status(res, storage_error_message(res))
heads <- httr::headers(res)
res <- httr::content(res)
val <- c(val, res$value)
if(is.null(heads$`x-ms-continuation-NextTableName`))
break
opts$NextTableName <- heads$`x-ms-continuation-NextTableName`
}
named_list(lapply(val, function(x) storage_table(endpoint, x$TableName)))
}
#' @rdname storage_table
#' @export
create_storage_table <- function(endpoint, ...)
{
UseMethod("create_storage_table")
}
#' @rdname storage_table
#' @export
create_storage_table.table_endpoint <- function(endpoint, name, ...)
{
res <- call_table_endpoint(endpoint, "Tables", body=list(TableName=name), ..., http_verb="POST")
storage_table(endpoint, res$TableName)
}
#' @rdname storage_table
#' @export
create_storage_table.storage_table <- function(endpoint, ...)
{
create_storage_table(endpoint$endpoint, endpoint$name)
}
#' @rdname storage_table
#' @export
delete_storage_table <- function(endpoint, ...)
{
UseMethod("delete_storage_table")
}
#' @rdname storage_table
#' @export
delete_storage_table.table_endpoint <- function(endpoint, name, confirm=TRUE, ...)
{
if(!delete_confirmed(confirm, name, "table"))
return(invisible(NULL))
path <- sprintf("Tables('%s')", name)
invisible(call_table_endpoint(endpoint, path, http_verb="DELETE"))
}
#' @rdname storage_table
#' @export
delete_storage_table.storage_table <- function(endpoint, ...)
{
delete_storage_table(endpoint$endpoint, endpoint$name, ...)
}
#' @export
print.storage_table <- function(x, ...)
{
cat("Azure table '", x$name, "'\n",
sep = "")
url <- httr::parse_url(x$endpoint$url)
url$path <- x$name
cat(sprintf("URL: %s\n", httr::build_url(url)))
if (!is_empty(x$endpoint$key))
cat("Access key: <hidden>\n")
else cat("Access key: <none supplied>\n")
if (!is_empty(x$endpoint$token)) {
cat("Azure Active Directory access token:\n")
print(x$endpoint$token)
}
else cat("Azure Active Directory access token: <none supplied>\n")
if (!is_empty(x$endpoint$sas))
cat("Account shared access signature: <hidden>\n")
else cat("Account shared access signature: <none supplied>\n")
cat(sprintf("Storage API version: %s\n", x$endpoint$api_version))
invisible(x)
}
| /scratch/gouwar.j/cran-all/cranData/AzureTableStor/R/storage_tables.R |
#' Batch transactions for table storage
#'
#' @param endpoint A table storage endpoint, of class `table_endpoint`.
#' @param path The path component of the operation.
#' @param options A named list giving the query parameters for the operation.
#' @param headers A named list giving any additional HTTP headers to send to the host. AzureCosmosR will handle authentication details, so you don't have to specify these here.
#' @param body The request body for a PUT/POST/PATCH operation.
#' @param metadata The level of ODATA metadata to include in the response.
#' @param http_verb The HTTP verb (method) for the operation.
#' @param operations For `create_batch_transaction`, a list of individual table operation objects, each of class `table_operation`, to be batched up.
#' @param transaction For `do_batch_transaction`, an object of class `batch_transaction`.
#' @param batch_status_handler For `do_batch_transaction`, what to do if one or more of the batch operations fails. The default is to signal a warning and return a list of response objects, from which the details of the failure(s) can be determined. Set this to "pass" to ignore the failure.
#' @param num_retries The number of times to retry the call, if the response is a HTTP error 429 (too many requests). The Cosmos DB endpoint tends to be aggressive at rate-limiting requests, to maintain the desired level of latency. This will generally not affect calls to an endpoint provided by a storage account.
#' @param ... Arguments passed to lower-level functions.
#'
#' @details
#' Table storage supports batch transactions on entities that are in the same table and belong to the same partition group. Batch transactions are also known as _entity group transactions_.
#'
#' You can use `create_table_operation` to produce an object corresponding to a single table storage operation, such as inserting, deleting or updating an entity. Multiple such objects can then be passed to `create_batch_transaction`, which bundles them into a single atomic transaction. Call `do_batch_transaction` to send the transaction to the endpoint.
#'
#' Note that batch transactions are subject to some limitations imposed by the REST API:
#' - All entities subject to operations as part of the transaction must have the same `PartitionKey` value.
#' - An entity can appear only once in the transaction, and only one operation may be performed against it.
#' - The transaction can include at most 100 entities, and its total payload may be no more than 4 MB in size.
#'
#' @return
#' `create_table_operation` returns an object of class `table_operation`.
#'
#' Assuming the batch transaction did not fail due to rate-limiting, `do_batch_transaction` returns a list of objects of class `table_operation_response`, representing the results of each individual operation. Each object contains elements named `status`, `headers` and `body` containing the respective parts of the response. Note that the number of returned objects may be smaller than the number of operations in the batch, if the transaction failed.
#' @seealso
#' [import_table_entities], which uses (multiple) batch transactions under the hood
#'
#' [Performing entity group transactions](https://docs.microsoft.com/en-us/rest/api/storageservices/performing-entity-group-transactions)
#' @examples
#' \dontrun{
#'
#' endp <- table_endpoint("https://mycosmosdb.table.cosmos.azure.com:443", key="mykey")
#' tab <- create_storage_table(endp, "mytable")
#'
#' ## a simple batch insert
#' ir <- subset(iris, Species == "setosa")
#'
#' # property names must be valid C# variable names
#' names(ir) <- sub("\\.", "_", names(ir))
#'
#' # create the PartitionKey and RowKey properties
#' ir$PartitionKey <- ir$Species
#' ir$RowKey <- sprintf("%03d", seq_len(nrow(ir)))
#'
#' # generate the array of insert operations: 1 per row
#' ops <- lapply(seq_len(nrow(ir)), function(i)
#'     create_table_operation(endp, "mytable", body=ir[i, ], http_verb="POST"))
#'
#' # create a batch transaction and send it to the endpoint
#' bat <- create_batch_transaction(endp, ops)
#' do_batch_transaction(bat)
#'
#' }
#' @rdname table_batch
#' @export
create_table_operation <- function(endpoint, path, options=list(), headers=list(), body=NULL,
metadata=c("none", "minimal", "full"), http_verb=c("GET", "PUT", "POST", "PATCH", "DELETE", "HEAD"))
{
headers <- utils::modifyList(headers, list(DataServiceVersion="3.0;NetFx"))
if(!is.null(metadata))
{
accept <- switch(match.arg(metadata),
"none"="application/json;odata=nometadata",
"minimal"="application/json;odata=minimalmetadata",
"full"="application/json;odata=fullmetadata")
headers$Accept <- accept
}
obj <- list()
obj$endpoint <- endpoint
obj$path <- path
obj$options <- options
obj$headers <- headers
obj$method <- match.arg(http_verb)
obj$body <- body
structure(obj, class="table_operation")
}
serialize_table_operation <- function(object)
{
UseMethod("serialize_table_operation")
}
serialize_table_operation.table_operation <- function(object)
{
url <- httr::parse_url(object$endpoint$url)
url$path <- object$path
url$query <- object$options
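    # each operation is serialised as an embedded HTTP request (verb, URL, headers and
    # optional JSON body) that forms one MIME sub-part of the changeset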
preamble <- c(
"Content-Type: application/http",
"Content-Transfer-Encoding: binary",
"",
paste(object$method, httr::build_url(url), "HTTP/1.1"),
paste0(names(object$headers), ": ", object$headers),
if(!is.null(object$body)) "Content-Type: application/json"
)
if(is.null(object$body))
preamble
else if(!is.character(object$body))
{
body <- jsonlite::toJSON(object$body, auto_unbox=TRUE, null="null")
# special-case treatment for 1-row dataframes
if(is.data.frame(object$body) && nrow(object$body) == 1)
body <- substr(body, 2, nchar(body) - 1)
c(preamble, "", body)
}
else c(preamble, "", object$body)
}
#' @rdname table_batch
#' @export
create_batch_transaction <- function(endpoint, operations)
{
structure(list(endpoint=endpoint, ops=operations), class="batch_transaction")
}
#' @rdname table_batch
#' @export
do_batch_transaction <- function(transaction, ...)
{
UseMethod("do_batch_transaction")
}
#' @rdname table_batch
#' @export
do_batch_transaction.batch_transaction <- function(transaction,
batch_status_handler=c("warn", "stop", "message", "pass"), num_retries=10, ...)
{
# batch REST API only supports 1 changeset per batch, and is unlikely to change
batch_bound <- paste0("batch_", uuid::UUIDgenerate())
changeset_bound <- paste0("changeset_", uuid::UUIDgenerate())
headers <- list(`Content-Type`=paste0("multipart/mixed; boundary=", batch_bound))
batch_preamble <- c(
paste0("--", batch_bound),
paste0("Content-Type: multipart/mixed; boundary=", changeset_bound),
""
)
batch_postscript <- c(
"",
paste0("--", changeset_bound, "--"),
paste0("--", batch_bound, "--")
)
serialized <- lapply(transaction$ops,
function(op) c(paste0("--", changeset_bound), serialize_table_operation(op)))
body <- paste0(c(batch_preamble, unlist(serialized), batch_postscript), collapse="\n")
if(nchar(body) > 4194304)
stop("Batch request too large, must be 4MB or less")
for(i in seq_len(num_retries))
{
res <- call_table_endpoint(transaction$endpoint, "$batch", headers=headers, body=body, encode="raw",
http_verb="POST")
reslst <- process_batch_response(res)
statuses <- sapply(reslst, `[[`, "status")
complete <- all(statuses != 429)
if(complete)
break
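        # rate-limited (HTTP 429): back off exponentially before retrying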
Sys.sleep(1.5^i)
}
if(!complete)
httr::stop_for_status(429, "complete batch transaction")
batch_status_handler <- match.arg(batch_status_handler)
if(any(statuses >= 300) && batch_status_handler != "pass")
{
msg <- paste("Batch transaction failed, max status code was", max(statuses))
switch(batch_status_handler,
"stop"=stop(msg, call.=FALSE),
"warn"=warning(msg, call.=FALSE),
"message"=message(msg, call.=FALSE)
)
}
statuses
}
process_batch_response <- function(response)
{
# assume response (including body) is always text
response <- rawToChar(response)
lines <- strsplit(response, "\r?\n\r?")[[1]]
batch_bound <- lines[1]
changeset_bound <- sub("^.+boundary=(.+)$", "\\1", lines[2])
n <- length(lines)
# assume only 1 changeset
batch_end <- grepl(batch_bound, lines[n])
if(!any(batch_end))
stop("Invalid batch response, batch boundary not found", call.=FALSE)
changeset_end <- grepl(changeset_bound, lines[n-1])
if(!any(changeset_end))
stop("Invalid batch response, changeset boundary not found", call.=FALSE)
lines <- lines[3:(n-3)]
op_bounds <- grep(changeset_bound, lines)
Map(
function(start, end) process_operation_response(lines[seq(start, end)]),
op_bounds + 1,
c(op_bounds[-1], length(lines))
)
}
process_operation_response <- function(response)
{
blanks <- which(response == "")
if(length(blanks) < 2)
stop("Invalid operation response", call.=FALSE)
headers <- response[seq(blanks[1]+1, blanks[2]-1)] # skip over http stuff
status <- as.numeric(sub("^.+ (\\d{3}) .+$", "\\1", headers[1]))
headers <- strsplit(headers[-1], ": ")
names(headers) <- sapply(headers, `[[`, 1)
headers <- sapply(headers, `[[`, 2, simplify=FALSE)
class(headers) <- c("insensitive", "list")
body <- if(!(status %in% c(204, 205)) && blanks[2] < length(response))
response[seq(blanks[2]+1, length(response))]
else NULL
obj <- list(status=status, headers=headers, body=body)
class(obj) <- "table_operation_response"
obj
}
#' @export
print.table_operation <- function(x, ...)
{
cat("<Table storage batch operation>\n")
invisible(x)
}
#' @export
print.table_operation_response <- function(x, ...)
{
cat("<Table storage batch operation response>\n")
invisible(x)
}
#' @export
print.batch_transaction <- function(x, ...)
{
cat("<Table storage batch transaction>\n")
invisible(x)
}
| /scratch/gouwar.j/cran-all/cranData/AzureTableStor/R/table_batch_request.R |
#' Table storage endpoint
#'
#' Table storage endpoint object, and method to call it.
#'
#' @param endpoint For `table_endpoint`, the URL of the table service endpoint. This will be of the form `https://{account-name}.table.core.windows.net` if the service is provided by a storage account in the Azure public cloud, while for a CosmosDB database, it will be of the form `https://{account-name}.table.cosmos.azure.com:443`. For `call_table_endpoint`, an object of class `table_endpoint`.
#' @param key The access key for the storage account.
#' @param token An Azure Active Directory (AAD) authentication token. For compatibility with AzureStor; not used for table storage.
#' @param sas A shared access signature (SAS) for the account. At least one of `key` or `sas` should be provided.
#' @param api_version The storage API version to use when interacting with the host. Defaults to "2019-07-07".
#' @param path For `call_table_endpoint`, the path component of the endpoint call.
#' @param options For `call_table_endpoint`, a named list giving the query parameters for the operation.
#' @param headers For `call_table_endpoint`, a named list giving any additional HTTP headers to send to the host. AzureCosmosR will handle authentication details, so you don't have to specify these here.
#' @param body For `call_table_endpoint`, the request body for a PUT/POST/PATCH call.
#' @param http_verb For `call_table_endpoint`, the HTTP verb (method) of the operation.
#' @param http_status_handler For `call_table_endpoint`, the R handler for the HTTP status code of the response. ``"stop"``, ``"warn"`` or ``"message"`` will call the corresponding handlers in httr, while ``"pass"`` ignores the status code. The latter is primarily useful for debugging purposes.
#' @param return_headers For `call_table_endpoint`, whether to return the (parsed) response headers instead of the body. Ignored if `http_status_handler="pass"`.
#' @param metadata For `call_table_endpoint`, the level of ODATA metadata to include in the response.
#' @param num_retries The number of times to retry the call, if the response is a HTTP error 429 (too many requests). The Cosmos DB endpoint tends to be aggressive at rate-limiting requests, to maintain the desired level of latency. This will generally not affect calls to an endpoint provided by a storage account.
#' @param ... For `call_table_endpoint`, further arguments passed to `AzureStor::call_storage_endpoint` and `httr::VERB`.
#'
#' @return
#' `table_endpoint` returns an object of class `table_endpoint`, inheriting from `storage_endpoint`. This is the analogue of the `blob_endpoint`, `file_endpoint` and `adls_endpoint` classes provided by the AzureStor package.
#'
#' `call_table_endpoint` returns the body of the response by default, or the headers if `return_headers=TRUE`. If `http_status_handler="pass"`, it returns the entire response object without modification.
#'
#' @seealso
#' [storage_table], [table_entity], [AzureStor::call_storage_endpoint]
#'
#' [Table service REST API reference](https://docs.microsoft.com/en-us/rest/api/storageservices/table-service-rest-api)
#'
#' [Authorizing requests to Azure storage services](https://docs.microsoft.com/en-us/rest/api/storageservices/authorize-requests-to-azure-storage)
#' @examples
#' \dontrun{
#'
#' # storage account table endpoint
#' table_endpoint("https://mystorageacct.table.core.windows.net", key="mykey")
#'
#' # Cosmos DB table endpoint
#' table_endpoint("https://mycosmosdb.table.cosmos.azure.com:443", key="mykey")
#'
#' }
#' @rdname table_endpoint
#' @export
table_endpoint <- function(endpoint, key=NULL, token=NULL, sas=NULL,
api_version=getOption("azure_storage_api_version"))
{
if(!is_endpoint_url(endpoint, "table"))
warning("Not a recognised table endpoint", call.=FALSE)
if(!is.null(token))
{
warning("Table storage does not use Azure Active Directory authentication")
token <- NULL
}
obj <- list(url=endpoint, key=key, token=token, sas=sas, api_version=api_version)
class(obj) <- c("table_endpoint", "storage_endpoint")
obj
}
#' @rdname table_endpoint
#' @export
call_table_endpoint <- function(endpoint, path, options=list(), headers=list(), body=NULL, ...,
http_verb=c("GET", "DELETE", "PUT", "POST", "HEAD", "PATCH"),
http_status_handler=c("stop", "warn", "message", "pass"),
return_headers=(http_verb == "HEAD"),
metadata=c("none", "minimal", "full"),
num_retries=10)
{
headers <- utils::modifyList(headers, list(DataServiceVersion="3.0;NetFx"))
if(!is.null(metadata))
{
accept <- switch(match.arg(metadata),
"none"="application/json;odata=nometadata",
"minimal"="application/json;odata=minimalmetadata",
"full"="application/json;odata=fullmetadata")
headers$Accept <- accept
}
if(is.list(body))
{
body <- jsonlite::toJSON(body, auto_unbox=TRUE, null="null")
headers$`Content-Length` <- nchar(body)
headers$`Content-Type` <- "application/json"
}
http_verb <- match.arg(http_verb)
# handle possible rate limiting in Cosmos DB
for(i in seq_len(num_retries))
{
res <- call_storage_endpoint(endpoint, path=path, options=options, body=body, headers=headers,
http_verb=http_verb, http_status_handler="pass", ...)
if(httr::status_code(res) != 429)
break
Sys.sleep(1.5^i)
}
process_storage_response(res, match.arg(http_status_handler), return_headers)
}
| /scratch/gouwar.j/cran-all/cranData/AzureTableStor/R/table_endpoint.R |
#' Operations on table entities (rows)
#'
#' @param table A table object, of class `storage_table`.
#' @param entity For `insert_table_entity` and `update_table_entity`, a named list giving the properties (columns) of the entity. See 'Details' below.
#' @param data For `import_table_entities`, a data frame. See 'Details' below.
#' @param row_key,partition_key For `get_table_entity`, `update_table_entity` and `delete_table_entity`, the row and partition key values that identify the entity to get, update or delete. For `import_table_entities`, the columns in the imported data to treat as the row and partition keys. The default is to use columns named 'RowKey' and 'PartitionKey' respectively.
#' @param etag For `update_table_entity` and `delete_table_entity`, an optional Etag value. If this is supplied, the update or delete operation will proceed only if the target entity's Etag matches this value. This ensures that an entity is only updated/deleted if it has not been modified since it was last retrieved.
#' @param filter,select For `list_table_entities`, optional row filter and column select expressions to subset the result with. If omitted, `list_table_entities` will return all entities in the table.
#' @param as_data_frame For `list_table_entities`, whether to return the results as a data frame, rather than a list of table rows.
#' @param batch_status_handler For `import_table_entities`, what to do if one or more of the batch operations fails. The default is to signal a warning and return a list of response objects, from which the details of the failure(s) can be determined. Set this to "pass" to ignore the failure.
#' @param ... For `import_table_entities`, further named arguments passed to `do_batch_transaction`.
#'
#' @details
#' These functions operate on rows of a table, also known as _entities_. `insert`, `get`, `update` and `delete_table_entity` operate on an individual row. `import_table_entities` bulk-inserts multiple rows of data into the table, using batch transactions. `list_table_entities` queries the table and returns multiple rows, subsetted on the `filter` and `select` arguments.
#'
#' Table storage imposes the following requirements for properties (columns) of an entity:
#' - There must be properties named `RowKey` and `PartitionKey`, which together form the entity's unique identifier. These properties must be of type character.
#' - The property `Timestamp` cannot be used (strictly speaking, it is reserved by the system).
#' - There can be at most 255 properties per entity, although different entities can have different properties.
#' - Table properties must be atomic. In particular, they cannot be nested lists.
#'
#' Note that table storage does _not_ require that all entities in a table must have the same properties.
#'
#' For `insert_table_entity`, `update_table_entity` and `import_table_entities`, you can also specify JSON text representing the data to insert/update/import, instead of a list or data frame.
#'
#' `list_table_entities(as_data_frame=TRUE)` for a large table may be slow. If this is a problem, and you know that all entities in the table have the same schema, try setting `as_data_frame=FALSE` and converting to a data frame manually.
#' @return
#' `insert_table_entity` and `update_table_entity` return the Etag of the inserted/updated entity, invisibly.
#'
#' `get_table_entity` returns a named list of properties for the given entity.
#'
#' `list_table_entities` returns a data frame if `as_data_frame=TRUE`, and a list of entities (rows) otherwise.
#'
#' `import_table_entities` invisibly returns a named list, with one component for each value of the `PartitionKey` column. Each component contains the results of the individual operations to insert each row into the table.
#'
#' @seealso
#' [storage_table], [do_batch_transaction]
#'
#' [Understanding the table service data model](https://docs.microsoft.com/en-us/rest/api/storageservices/understanding-the-table-service-data-model)
#' @examples
#' \dontrun{
#'
#' endp <- table_endpoint("https://mycosmosdb.table.cosmos.azure.com:443", key="mykey")
#' tab <- create_storage_table(endp, "mytable")
#'
#' insert_table_entity(tab, list(
#' RowKey="row1",
#' PartitionKey="partition1",
#' firstname="Bill",
#' lastname="Gates"
#' ))
#'
#' get_table_entity(tab, "row1", "partition1")
#'
#' # specifying the entity as JSON text instead of a list
#' update_table_entity(tab,
#' '{
#' "RowKey": "row1",
#' "PartitionKey": "partition1",
#' "firstname": "Bill",
#' "lastname": "Gates"
#' }')
#'
#' # we can import to the same table as above: table storage doesn't enforce a schema
#' import_table_entities(tab, mtcars,
#' row_key=row.names(mtcars),
#' partition_key=as.character(mtcars$cyl))
#'
#' list_table_entities(tab)
#' list_table_entities(tab, filter="firstname eq 'Satya'")
#' list_table_entities(tab, filter="RowKey eq 'Toyota Corolla'")
#'
#' delete_table_entity(tab, "row1", "partition1")
#'
#' }
#' @aliases table_entity
#' @rdname table_entity
#' @export
insert_table_entity <- function(table, entity)
{
if(is.character(entity) && jsonlite::validate(entity))
entity <- jsonlite::fromJSON(entity, simplifyDataFrame=FALSE)
else if(is.data.frame(entity))
{
if(nrow(entity) == 1) # special-case treatment for 1-row dataframes
entity <- unclass(entity)
else stop("Can only insert one entity at a time; use import_table_entities() to insert multiple entities",
call.=FALSE)
}
check_column_names(entity)
headers <- list(Prefer="return-no-content")
res <- call_table_endpoint(table$endpoint, table$name, body=entity, headers=headers, http_verb="POST",
return_headers=TRUE)
res$etag
}
#' @rdname table_entity
#' @export
update_table_entity <- function(table, entity, row_key=NULL, partition_key=NULL, etag=NULL)
{
if(is.character(entity) && jsonlite::validate(entity))
entity <- jsonlite::fromJSON(entity, simplifyDataFrame=FALSE)
else if(is.data.frame(entity))
{
if(nrow(entity) == 1) # special-case treatment for 1-row dataframes
entity <- unclass(entity)
else stop("Can only update one entity at a time", call.=FALSE)
}
if(!is.null(row_key))
entity$RowKey <- row_key
if(!is.null(partition_key))
entity$PartitionKey <- partition_key
check_column_names(entity)
headers <- if(!is.null(etag))
list(`If-Match`=etag)
else list()
path <- sprintf("%s(PartitionKey='%s',RowKey='%s')", table$name, entity$PartitionKey, entity$RowKey)
res <- call_table_endpoint(table$endpoint, path, body=entity, headers=headers, http_verb="PUT",
return_headers=TRUE)
res$etag
}
#' @rdname table_entity
#' @export
delete_table_entity <- function(table, row_key, partition_key, etag=NULL)
{
path <- sprintf("%s(PartitionKey='%s',RowKey='%s')", table$name, partition_key, row_key)
if(is.null(etag))
etag <- "*"
headers <- list(`If-Match`=etag)
invisible(call_table_endpoint(table$endpoint, path, headers=headers, http_verb="DELETE"))
}
#' @rdname table_entity
#' @export
list_table_entities <- function(table, filter=NULL, select=NULL, as_data_frame=TRUE)
{
path <- sprintf("%s()", table$name)
opts <- list(
`$filter`=filter,
`$select`=paste0(select, collapse=",")
)
val <- list()
repeat
{
res <- call_table_endpoint(table$endpoint, path, options=opts, http_status_handler="pass")
httr::stop_for_status(res, storage_error_message(res))
heads <- httr::headers(res)
res <- httr::content(res)
val <- c(val, res$value)
if(is.null(heads$`x-ms-continuation-NextPartitionKey`))
break
opts$NextPartitionKey <- heads$`x-ms-continuation-NextPartitionKey`
opts$NextRowKey <- heads$`x-ms-continuation-NextRowKey`
}
# table storage allows columns to vary by row, so cannot use base::rbind
if(as_data_frame)
do.call(vctrs::vec_rbind, lapply(val, as.data.frame, stringsAsFactors=FALSE, optional=TRUE))
else val
}
#' @rdname table_entity
#' @export
get_table_entity <- function(table, row_key, partition_key, select=NULL)
{
path <- sprintf("%s(PartitionKey='%s',RowKey='%s')", table$name, partition_key, row_key)
opts <- if(!is.null(select))
list(`$select`=paste0(select, collapse=","))
else list()
call_table_endpoint(table$endpoint, path, options=opts)
}
#' @rdname table_entity
#' @export
import_table_entities <- function(table, data, row_key=NULL, partition_key=NULL,
batch_status_handler=c("warn", "stop", "message", "pass"), ...)
{
if(is.character(data) && jsonlite::validate(data))
data <- jsonlite::fromJSON(data, simplifyDataFrame=TRUE)
if(!is.null(row_key))
data$RowKey <- row_key
if(!is.null(partition_key))
data$PartitionKey <- partition_key
check_column_names(data)
endpoint <- table$endpoint
path <- table$name
headers <- list(Prefer="return-no-content")
batch_status_handler <- match.arg(batch_status_handler)
lst <- lapply(split(data, data$PartitionKey), function(dfpart)
{
n <- nrow(dfpart)
nchunks <- n %/% 100 + (n %% 100 > 0)
lapply(seq_len(nchunks), function(chunk)
{
rows <- seq(from=(chunk-1)*100 + 1, to=min(chunk*100, n))
dfchunk <- dfpart[rows, ]
ops <- lapply(seq_len(nrow(dfchunk)), function(i)
create_table_operation(endpoint, path, body=dfchunk[i, ], headers=headers, http_verb="POST"))
create_batch_transaction(endpoint, ops)
})
})
res <- lapply(unlist(lst, recursive=FALSE, use.names=FALSE), do_batch_transaction,
batch_status_handler=batch_status_handler, ...)
invisible(res)
}
check_column_names <- function(data)
{
if(!("PartitionKey" %in% names(data)) || !("RowKey" %in% names(data)))
stop("Data must contain columns named 'PartitionKey' and 'RowKey'", call.=FALSE)
if(!(is.character(data$PartitionKey) || is.factor(data$PartitionKey)) ||
!(is.character(data$RowKey) || is.factor(data$RowKey)))
stop("RowKey and PartitionKey columns must be character or factor", call.=FALSE)
if("Timestamp" %in% names(data))
stop("'Timestamp' column is reserved for system use", call.=FALSE)
}
---
title: "Introduction to AzureTableStor"
author: Hong Ooi
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Introduction to AzureTableStor}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
## Azure table storage
Azure table storage is a service that stores structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design. Because table storage is schemaless, it's easy to adapt your data as the needs of your application evolve. Access to table storage data is fast and cost-effective for many types of applications, and is typically lower in cost than traditional SQL for similar volumes of data.
You can use table storage to store flexible datasets like user data for web applications, address books, device information, or other types of metadata your service requires. You can store any number of entities in a table, and a storage account may contain any number of tables, up to the capacity limit of the storage account.
AzureTableStor is an R interface to table storage, building on the functionality provided by the [AzureStor](https://github.com/Azure/AzureStor) package. The table storage service is available both as part of a storage account and via Azure Cosmos DB; AzureTableStor is able to work with either.
## Tables
AzureTableStor provides a `table_endpoint` function that is the analogue of AzureStor's `blob_endpoint`, `file_endpoint` and `adls_endpoint` functions. There are methods for retrieving, creating, listing and deleting tables within the endpoint.
```r
library(AzureTableStor)
# storage account endpoint
endp <- table_endpoint("https://mystorageacct.table.core.windows.net", key="mykey")
# Cosmos DB w/table API endpoint
endp <- table_endpoint("https://mycosmosdb.table.cosmos.azure.com:443", key="mykey")
create_storage_table(endp, "mytable")
list_storage_tables(endp)
tab <- storage_table(endp, "mytable")
```
## Entities
In table storage jargon, an _entity_ is a row in a table. The columns of the table are _properties_. Note that table storage does not enforce a schema; that is, individual entities in a table can have different properties. An entity is identified by its `RowKey` and `PartitionKey` properties; the combination of these two values must be unique for each entity.
AzureTableStor provides the following functions to work with data in a table:
- `insert_table_entity`: inserts a row into the table.
- `update_table_entity`: updates a row with new data, or inserts a new row if it doesn't already exist.
- `get_table_entity`: retrieves an individual row from the table.
- `delete_table_entity`: deletes a row from the table.
- `import_table_entities`: inserts multiple rows of data from a data frame into the table.
For the functions that insert or update rows, you can specify the data either as an R list/data frame or as JSON text.
```r
insert_table_entity(tab, list(
RowKey="row1",
PartitionKey="partition1",
firstname="Bill",
lastname="Gates"
))
get_table_entity(tab, "row1", "partition1")
# specifying the entity as JSON text instead of a list
update_table_entity(tab,
'{
"RowKey": "row1",
"PartitionKey": "partition1",
"firstname": "Satya",
"lastname": "Nadella
}')
# we can import to the same table as above: table storage doesn't enforce a schema
import_table_entities(tab, mtcars,
row_key=row.names(mtcars),
partition_key=as.character(mtcars$cyl))
list_table_entities(tab)
list_table_entities(tab, filter="firstname eq 'Satya'")
list_table_entities(tab, filter="RowKey eq 'Toyota Corolla'")
```
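
The `filter` and `select` arguments map to the `$filter` and `$select` OData query options of the table storage REST API. For a large table, converting the result to a data frame can be slow; a possible workaround (sketched below, reusing the objects created above) is to subset the data on the server and do the conversion yourself.

```r
# retrieve two properties for the 6-cylinder cars, skipping the data frame conversion
ents <- list_table_entities(tab, filter="PartitionKey eq '6'", select=c("RowKey", "mpg", "hp"),
    as_data_frame=FALSE)

# these entities all have the same properties, so a simple rbind suffices
do.call(rbind, lapply(ents, as.data.frame, stringsAsFactors=FALSE))
```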
## Batch transactions
Notice how, with the exception of `import_table_entities`, all of the above entity functions work on a single row of data. Table storage provides a batch execution facility, which lets you bundle up single-row operations into a single transaction that will be executed atomically. In the jargon, this is known as an _entity group transaction_. `import_table_entities` is an example of an entity group transaction: it bundles up multiple rows of data into batch jobs, which is much more efficient than sending each row individually to the server.
Entity group transactions are subject to some limitations imposed by the REST API:
- All entities subject to operations as part of the transaction must have the same `PartitionKey` value.
- An entity can appear only once in the transaction, and only one operation may be performed against it.
- The transaction can include at most 100 entities, and its total payload may be no more than 4 MB in size (or 2 MB for a Cosmos DB table endpoint).
The `create_table_operation`, `create_batch_transaction` and `do_batch_transaction` functions let you perform entity group transactions. Here is an example of a simple batch insert. The actual `import_table_entities` function is more complex as it can also handle multiple partition keys and more than 100 rows of data.
```r
ir <- subset(iris, Species == "setosa")
# property names must be valid C# variable names
names(ir) <- sub("\\.", "_", names(ir))
# create the PartitionKey and RowKey properties
ir$PartitionKey <- ir$Species
ir$RowKey <- sprintf("%03d", seq_len(nrow(ir)))
# generate the array of insert operations: 1 per row
ops <- lapply(seq_len(nrow(ir)), function(i)
    create_table_operation(endp, "mytable", body=ir[i, ], http_verb="POST"))
# create a batch transaction and send it to the endpoint
bat <- create_batch_transaction(endp, ops)
do_batch_transaction(bat)
```
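
Assuming the transaction succeeded, we can read the rows back to check (this reuses the `tab` object created earlier):

```r
setosa_rows <- list_table_entities(tab, filter="PartitionKey eq 'setosa'")
nrow(setosa_rows)
```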
#' @import AzureRMR
NULL
#' @export
AzureRMR::build_template_definition
#' @export
AzureRMR::build_template_parameters
globalVariables(c("self", "pool"), "AzureVM")
# adding methods to classes in external package must go in .onLoad
.onLoad <- function(libname, pkgname)
{
add_sub_methods()
add_rg_methods()
add_defunct_methods()
options(azure_vm_minpoolsize=2)
options(azure_vm_maxpoolsize=10)
}
# documentation is separate from implementation because roxygen still doesn't know how to handle R6
#' List available VM sizes
#'
#' Method for the [AzureRMR::az_subscription] and [AzureRMR::az_resource_group] classes.
#'
#' @section Usage:
#' ```
#' ## R6 method for class 'az_subscription'
#' list_vm_sizes(location, name_only = FALSE)
#'
#' ## R6 method for class 'az_resource_group'
#' list_vm_sizes(name_only = FALSE)
#' ```
#' @section Arguments:
#' - `location`: For the subscription class method, the location/region for which to obtain available VM sizes.
#' - `name_only`: Whether to return only a vector of names, or all information on each VM size.
#'
#' @section Value:
#' If `name_only` is TRUE, a character vector of names, suitable for passing to `create_vm`. If FALSE, a data frame containing the following information for each VM size: the name, number of cores, OS disk size, resource disk size, memory, and maximum data disks.
#'
#' @seealso
#' [create_vm]
#'
#' @examples
#' \dontrun{
#'
#' sub <- AzureRMR::get_azure_login()$
#' get_subscription("subscription_id")
#'
#' sub$list_vm_sizes("australiaeast")
#'
#' # same output as above
#' rg <- sub$create_resource_group("rgname", location="australiaeast")
#' rg$list_vm_sizes()
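#'
#' # return only the size names (illustrative)
#' sub$list_vm_sizes("australiaeast", name_only=TRUE)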
#'
#' }
#' @rdname list_vm_sizes
#' @aliases list_vm_sizes
#' @name list_vm_sizes
NULL
#' Get existing virtual machine(s)
#'
#' Method for the [AzureRMR::az_subscription] and [AzureRMR::az_resource_group] classes.
#'
#' @section Usage:
#' ```
#' ## R6 method for class 'az_subscription'
#' get_vm(name, resource_group = name)
#'
#' ## R6 method for class 'az_resource_group'
#' get_vm(name)
#'
#' ## R6 method for class 'az_subscription'
#' get_vm_scaleset(name, resource_group = name)
#'
#' ## R6 method for class 'az_resource_group'
#' get_vm_scaleset(name)
#'
#' ## R6 method for class 'az_resource_group'
#' get_vm_resource(name)
#' get_vm_scaleset_resource(name)
#' ```
#' @section Arguments:
#' - `name`: The name of the VM or scaleset.
#' - `resource_group`: For the `az_subscription` methods, the resource group in which `get_vm()` and `get_vm_scaleset()` will look for the VM or scaleset. Defaults to the VM name.
#'
#' @section Value:
#' For `get_vm()`, an object representing the VM deployment. This will include other resources besides the VM itself, such as the network interface, virtual network, etc.
#'
#' For `get_vm_scaleset()`, an object representing the scaleset deployment. Similarly to `get_vm()`, this includes other resources besides the scaleset.
#'
#' For `get_vm_resource()` and `get_vm_scaleset_resource()`, the VM or scaleset resource itself.
#'
#' @seealso
#' [az_vm_template], [az_vm_resource], [az_vmss_template], [az_vmss_resource] for the methods available for working with VMs and VM scalesets.
#'
#' [AzureRMR::az_subscription], [AzureRMR::az_resource_group]
#'
#' @examples
#' \dontrun{
#'
#' sub <- AzureRMR::get_azure_login()$
#' get_subscription("subscription_id")
#'
#' sub$get_vm("myvirtualmachine")
#' sub$get_vm_scaleset("myscaleset")
#'
#' rg <- sub$get_resource_group("rgname")
#' rg$get_vm("myothervirtualmachine")
#' rg$get_vm_scaleset("myotherscaleset")
#'
#' }
#' @rdname get_vm
#' @aliases get_vm get_vm_scaleset get_vm_resource get_vm_scaleset_resource
#' @name get_vm
NULL
#' Create a new virtual machine or scaleset of virtual machines
#'
#' Method for the [AzureRMR::az_subscription] and [AzureRMR::az_resource_group] classes.
#'
#' @section Usage:
#' ```
#' ## R6 method for class 'az_resource_group'
#' create_vm(name, login_user, size = "Standard_DS3_v2", config = "ubuntu_dsvm",
#' managed_identity = TRUE, datadisks = numeric(0), ...,
#' template, parameters, mode = "Incremental", wait = TRUE)
#'
#' ## R6 method for class 'az_subscription'
#' create_vm(name, ..., resource_group = name, location)
#'
#' ## R6 method for class 'az_resource_group'
#' create_vm_scaleset(name, login_user, instances, size = "Standard_DS1_v2",
#' config = "ubuntu_dsvm_ss", ...,
#' template, parameters, mode = "Incremental", wait = TRUE)
#'
#' ## R6 method for class 'az_subscription'
#' create_vm_scaleset(name, ..., resource_group = name, location)
#' ```
#' @section Arguments:
#' - `name`: The name of the VM or scaleset.
#' - `location`: For the subscription methods, the location for the VM or scaleset. Use the `list_locations()` method of the `AzureRMR::az_subscription` class to see what locations are available.
#' - `resource_group`: For the subscription methods, the resource group in which to place the VM or scaleset. Defaults to a new resource group with the same name as the VM.
#' - `login_user`: The details for the admin login account. An object of class `user_config`, obtained by a call to the `user_config` function.
#' - `size`: The VM (instance) size. Use the [list_vm_sizes] method to see what sizes are available.
#' - `config`: The VM or scaleset configuration. See 'Details' below for how to specify this. The default is to deploy an Ubuntu 20.04 VM.
#' - `managed_identity`: For `create_vm`, whether the VM should have a managed identity attached.
#' - `datadisks`: Any data disks to attach to the VM or scaleset. See 'Details' below.
#' - `instances`: For `create_vm_scaleset`, the initial number of instances in the scaleset.
#' - `...`: For the subscription methods, any of the other arguments listed here, which will be passed to the resource group method. For the resource group method, additional arguments to pass to the VM/scaleset configuration functions [vm_config] and [vmss_config]. See the examples below.
#' - `template,parameters`: The template definition and parameters to deploy. By default, these are constructed from the values of the other arguments, but you can supply your own template and/or parameters as well.
#' - `wait`: Whether to wait until the deployment is complete.
#' - `mode`: The template deployment mode. If "Complete", any existing resources in the resource group will be deleted.
#'
#' @section Details:
#' These methods deploy a template to create a new virtual machine or scaleset.
#'
#' The `config` argument can be specified in the following ways:
#' - As the name of a supplied VM or scaleset configuration, like "ubuntu_dsvm" or "ubuntu_dsvm_ss". AzureVM comes with a number of supplied configurations to deploy commonly used images, which can be seen at [vm_config] and [vmss_config]. Any arguments in `...` will be passed to the configuration, allowing you to customise the deployment.
#' - As a call to the `vm_config` or `vmss_config` functions, to deploy a custom VM image.
#' - As an object of class `vm_config` or `vmss_config`.
#'
#' The data disks for the VM can be specified as either a vector of numeric disk sizes in GB, or as a list of `datadisk_config` objects, created via calls to the `datadisk_config` function. Currently, AzureVM only supports creating data disks at deployment time for single VMs, not scalesets.
#'
#' You can also supply your own template definition and parameters for deployment, via the `template` and `parameters` arguments. See [AzureRMR::az_template] for information how to create templates.
#'
#' The `AzureRMR::az_subscription` methods will by default create the VM in _exclusive_ mode, meaning a new resource group is created solely to hold the VM or scaleset. This simplifies managing the VM considerably; in particular deleting the resource group will also automatically delete all the deployed resources.
#'
#' @section Value:
#' For `create_vm`, an object of class `az_vm_template` representing the created VM. For `create_vm_scaleset`, an object of class `az_vmss_template` representing the scaleset.
#'
#' @seealso
#' [az_vm_template], [az_vmss_template]
#'
#' [vm_config], [vmss_config], [user_config], [datadisk_config]
#'
#' [AzureRMR::az_subscription], [AzureRMR::az_resource_group],
#' [Data Science Virtual Machine](https://azure.microsoft.com/en-us/services/virtual-machines/data-science-virtual-machines/)
#'
#' @examples
#' \dontrun{
#'
#' sub <- AzureRMR::get_azure_login()$
#' get_subscription("subscription_id")
#'
#' # default Ubuntu 20.04 VM:
#' # SSH key login, Standard_DS3_v2, publicly accessible via SSH
#' sub$create_vm("myubuntuvm", user_config("myname", "~/.ssh/id_rsa.pub"),
#' location="australiaeast")
#'
#' # Windows Server 2019, with a 500GB datadisk attached, not publicly accessible
#' sub$create_vm("mywinvm", user_config("myname", password="Use-strong-passwords!"),
#' size="Standard_DS4_v2", config="windows_2019", datadisks=500, ip=NULL,
#' location="australiaeast")
#'
#' # Ubuntu DSVM, GPU-enabled
#' sub$create_vm("mydsvm", user_config("myname", "~/.ssh/id_rsa.pub"), size="Standard_NC12",
#' config="ubuntu_dsvm_ss",
#' location="australiaeast")
#'
#' ## custom VM configuration: Windows 10 Pro 1903 with data disks
#' ## this assumes you have a valid Win10 desktop license
#' user <- user_config("myname", password="Use-strong-passwords!")
#' image <- image_config(
#' publisher="MicrosoftWindowsDesktop",
#' offer="Windows-10",
#' sku="19h1-pro"
#' )
#' datadisks <- list(
#' datadisk_config(250, type="Premium_LRS"),
#' datadisk_config(1000, type="Standard_LRS")
#' )
#' nsg <- nsg_config(
#' list(nsg_rule_allow_rdp)
#' )
#' config <- vm_config(
#' image=image,
#' keylogin=FALSE,
#' datadisks=datadisks,
#' nsg=nsg,
#' properties=list(licenseType="Windows_Client")
#' )
#' sub$create_vm("mywin10vm", user, size="Standard_DS2_v2", config=config,
#' location="australiaeast")
#'
#'
#' # default Ubuntu scaleset:
#' # load balancer and autoscaler enabled, Standard_DS1_v2
#' sub$create_vm_scaleset("mydsvmss", user_config("myname", "~/.ssh/id_rsa.pub"),
#' instances=5,
#' location="australiaeast"))
#'
#' # Ubuntu DSVM scaleset with public GPU-enabled instances, no load balancer or autoscaler
#' sub$create_vm_scaleset("mydsvmss", user_config("myname", "~/.ssh/id_rsa.pub"),
#' instances=5, size="Standard_NC12", config="ubuntu_dsvm_ss",
#' options=scaleset_options(public=TRUE),
#' load_balancer=NULL, autoscaler=NULL,
#' location="australiaeast")
#'
#' # RHEL scaleset, allow http/https access
#' sub$create_vm_scaleset("myrhelss", user_config("myname", "~/.ssh/id_rsa.pub"),
#' instances=5, config="rhel_8_ss",
#' nsg=nsg_config(list(nsg_rule_allow_http, nsg_rule_allow_https)),
#' location="australiaeast")
#'
#' # Large Debian scaleset, using low-priority (spot) VMs
#' # need to set the instance size to something that supports low-pri
#' sub$create_vm_scaleset("mydebss", user_config("myname", "~/.ssh/id_rsa.pub"),
#' instances=50, size="Standard_DS3_v2", config="debian_9_backports_ss",
#' options=scaleset_options(priority="spot", large_scaleset=TRUE),
#' location="australiaeast")
#'
#'
#' ## VM and scaleset in the same resource group and virtual network
#' # first, create the resgroup
#' rg <- sub$create_resource_group("rgname", "australiaeast")
#'
#' # create the master
#' rg$create_vm("mastervm", user_config("myname", "~/.ssh/id_rsa.pub"))
#'
#' # get the vnet resource
#' vnet <- rg$get_resource(type="Microsoft.Network/virtualNetworks", name="mastervm-vnet")
#'
#' # create the scaleset
#' rg$create_vm_scaleset("slavess", user_config("myname", "~/.ssh/id_rsa.pub"),
#' instances=5, vnet=vnet, nsg=NULL, load_balancer=NULL, autoscaler=NULL)
#'
#' }
#' @rdname create_vm
#' @aliases create_vm create_vm_scaleset
#' @name create_vm
NULL
#' Delete virtual machine
#'
#' Method for the [AzureRMR::az_subscription] and [AzureRMR::az_resource_group] classes.
#'
#' @docType class
#' @section Usage:
#' ```
#' ## R6 method for class 'az_resource_group'
#' delete_vm(name, confirm = TRUE, free_resources = TRUE)
#'
#' ## R6 method for class 'az_subscription'
#' delete_vm(name, confirm = TRUE, free_resources = TRUE,
#' resource_group = name)
#'
#' ## R6 method for class 'az_resource_group'
#' delete_vm_scaleset(name, confirm = TRUE, free_resources = TRUE)
#'
#' ## R6 method for class 'az_subscription'
#' delete_vm_scaleset(name, confirm = TRUE, free_resources = TRUE,
#' resource_group = name)
#' ```
#' @section Arguments:
#' - `name`: The name of the VM or scaleset.
#' - `confirm`: Whether to confirm the delete.
#' - `free_resources`: If this was a deployed template, whether to free all resources created during the deployment process.
#' - `resource_group`: For the `AzureRMR::az_subscription` method, the resource group containing the VM or scaleset.
#'
#' @section Details:
#' For the subscription methods, deleting the VM or scaleset will also delete its resource group.
#'
#' @seealso
#' [create_vm], [az_vm_template], [az_vm_resource],
#' [AzureRMR::az_subscription], [AzureRMR::az_resource_group]
#'
#' @examples
#' \dontrun{
#'
#' sub <- AzureRMR::get_azure_login()$
#' get_subscription("subscription_id")
#'
#' sub$delete_vm("myvm")
#' sub$delete_vm_scaleset("myscaleset")
#'
#' }
#' @rdname delete_vm
#' @aliases delete_vm delete_vm_scaleset
#' @name delete_vm
NULL
# extend subscription methods
add_sub_methods <- function()
{
az_subscription$set("public", "list_vm_sizes", overwrite=TRUE,
function(location, name_only=FALSE)
{
provider <- "Microsoft.Compute"
path <- "locations"
api_version <- self$get_provider_api_version(provider, path)
op <- file.path("providers", provider, path, location, "vmSizes")
res <- call_azure_rm(self$token, self$id, op, api_version=api_version)
if(!name_only)
do.call(rbind, lapply(res$value, data.frame, stringsAsFactors=FALSE))
else sapply(res$value, `[[`, "name")
})
az_subscription$set("public", "create_vm", overwrite=TRUE,
function(name, ..., resource_group=name, location)
{
if(!is_resource_group(resource_group))
{
rgnames <- names(self$list_resource_groups())
if(resource_group %in% rgnames)
{
resource_group <- self$get_resource_group(resource_group)
mode <- "Incremental"
}
else
{
message("Creating resource group '", resource_group, "'")
resource_group <- self$create_resource_group(resource_group, location=location)
mode <- "Complete"
}
}
else mode <- "Incremental" # if passed a resource group object, assume it already exists in Azure
res <- try(resource_group$create_vm(name, ..., mode=mode))
if(inherits(res, "try-error") && mode == "Complete")
{
resource_group$delete(confirm=FALSE)
stop("Unable to create VM", call.=FALSE)
}
res
})
az_subscription$set("public", "create_vm_scaleset", overwrite=TRUE,
function(name, ..., resource_group=name, location)
{
if(!is_resource_group(resource_group))
{
rgnames <- names(self$list_resource_groups())
if(resource_group %in% rgnames)
{
resource_group <- self$get_resource_group(resource_group)
mode <- "Incremental"
}
else
{
message("Creating resource group '", resource_group, "'")
resource_group <- self$create_resource_group(resource_group, location=location)
mode <- "Complete"
}
}
else mode <- "Incremental" # if passed a resource group object, assume it already exists in Azure
res <- try(resource_group$create_vm_scaleset(name, ..., mode=mode))
if(inherits(res, "try-error") && mode == "Complete")
{
resource_group$delete(confirm=FALSE)
stop("Unable to create VM scaleset", call.=FALSE)
}
res
})
az_subscription$set("public", "get_vm", overwrite=TRUE,
function(name, resource_group=name)
{
if(!is_resource_group(resource_group))
resource_group <- self$get_resource_group(resource_group)
resource_group$get_vm(name)
})
az_subscription$set("public", "get_vm_scaleset", overwrite=TRUE,
function(name, resource_group=name)
{
if(!is_resource_group(resource_group))
resource_group <- self$get_resource_group(resource_group)
resource_group$get_vm_scaleset(name)
})
az_subscription$set("public", "delete_vm", overwrite=TRUE,
function(name, confirm=TRUE, free_resources=TRUE, resource_group=name)
{
if(!is_resource_group(resource_group))
resource_group <- self$get_resource_group(resource_group)
resource_group$delete_vm(name, confirm=confirm, free_resources=free_resources)
})
az_subscription$set("public", "delete_vm_scaleset", overwrite=TRUE,
function(name, confirm=TRUE, free_resources=TRUE, resource_group=name)
{
if(!is_resource_group(resource_group))
resource_group <- self$get_resource_group(resource_group)
resource_group$delete_vm_scaleset(name, confirm=confirm, free_resources=free_resources)
})
}
# extend resource group methods
add_rg_methods <- function()
{
az_resource_group$set("public", "create_vm", overwrite=TRUE,
function(name, login_user, size="Standard_DS3_v2", config="ubuntu_20.04",
managed_identity=TRUE, datadisks=numeric(0),
..., template, parameters, mode="Incremental", wait=TRUE)
{
stopifnot(inherits(login_user, "user_config"))
if(is.character(config))
config <- get(config, getNamespace("AzureVM"))
if(is.function(config))
config <- config(!is_empty(login_user$key), managed_identity, datadisks, ...)
stopifnot(inherits(config, "vm_config"))
if(missing(template))
template <- build_template_definition(config)
if(missing(parameters))
parameters <- build_template_parameters(config, name, login_user, size)
az_vm_template$new(self$token, self$subscription, self$name, name,
template=template, parameters=parameters, mode=mode, wait=wait)
})
az_resource_group$set("public", "create_vm_scaleset", overwrite=TRUE,
function(name, login_user, instances, size="Standard_DS1_v2", config="ubuntu_20.04_ss",
..., template, parameters, mode="Incremental", wait=TRUE)
{
stopifnot(inherits(login_user, "user_config"))
if(is.character(config))
config <- get(config, getNamespace("AzureVM"))
if(is.function(config))
config <- config(...)
stopifnot(inherits(config, "vmss_config"))
if(missing(template))
template <- build_template_definition(config)
if(missing(parameters))
parameters <- build_template_parameters(config, name, login_user, size, instances)
az_vmss_template$new(self$token, self$subscription, self$name, name,
template=template, parameters=parameters, mode=mode, wait=wait)
})
az_resource_group$set("public", "get_vm", overwrite=TRUE,
function(name)
{
az_vm_template$new(self$token, self$subscription, self$name, name)
})
az_resource_group$set("public", "get_vm_scaleset", overwrite=TRUE,
function(name)
{
az_vmss_template$new(self$token, self$subscription, self$name, name)
})
az_resource_group$set("public", "delete_vm", overwrite=TRUE,
function(name, confirm=TRUE, free_resources=TRUE)
{
self$get_vm(name)$delete(confirm=confirm, free_resources=free_resources)
})
az_resource_group$set("public", "delete_vm_scaleset", overwrite=TRUE,
function(name, confirm=TRUE, free_resources=TRUE)
{
self$get_vm_scaleset(name)$delete(confirm=confirm, free_resources=free_resources)
})
az_resource_group$set("public", "list_vm_sizes", overwrite=TRUE,
function(name_only=FALSE)
{
az_subscription$
new(self$token, parms=list(subscriptionId=self$subscription))$
list_vm_sizes(self$location, name_only=name_only)
})
az_resource_group$set("public", "get_vm_resource", overwrite=TRUE,
function(name)
{
az_vm_resource$new(self$token, self$subscription, self$name,
type="Microsoft.Compute/virtualMachines", name=name)
})
az_resource_group$set("public", "get_vm_scaleset_resource", overwrite=TRUE,
function(name)
{
az_vmss_resource$new(self$token, self$subscription, self$name,
type="Microsoft.Compute/virtualMachineScalesets", name=name)
})
}
#' Defunct methods
#'
#' @section Usage:
#' ```
#' get_vm_cluster(...)
#' create_vm_cluster(...)
#' delete_vm_cluster(...)
#' ```
#' These methods for the `az_subscription` and `az_resource_group` classes are defunct in AzureVM 2.0. To work with virtual machine clusters, call the [get_vm_scaleset], [create_vm_scaleset] and [delete_vm_scaleset] methods instead.
#' @rdname defunct
#' @name defunct
#' @aliases get_vm_cluster create_vm_cluster delete_vm_cluster
NULL
add_defunct_methods <- function()
{
az_subscription$set("public", "get_vm_cluster", overwrite=TRUE, function(...)
{
.Defunct(msg="The 'get_vm_cluster' method is defunct.\nUse 'get_vm_scaleset' instead.")
})
az_subscription$set("public", "create_vm_cluster", overwrite=TRUE, function(...)
{
.Defunct(msg="The 'create_vm_cluster' method is defunct.\nUse 'create_vm_scaleset' instead.")
})
az_subscription$set("public", "delete_vm_cluster", overwrite=TRUE, function(...)
{
.Defunct(msg="The 'delete_vm_cluster' method is defunct.\nUse 'delete_vm_scaleset' instead.")
})
az_resource_group$set("public", "get_vm_cluster", overwrite=TRUE, function(...)
{
.Defunct(msg="The 'get_vm_cluster' method is defunct.\nUse 'get_vm_scaleset' instead.")
})
az_resource_group$set("public", "create_vm_cluster", overwrite=TRUE, function(...)
{
.Defunct(msg="The 'create_vm_cluster' method is defunct.\nUse 'create_vm_scaleset' instead.")
})
az_resource_group$set("public", "delete_vm_cluster", overwrite=TRUE, function(...)
{
.Defunct(msg="The 'delete_vm_cluster' method is defunct.\nUse 'delete_vm_scaleset' instead.")
})
}
#' Autoscaler configuration
#'
#' @param profiles A list of autoscaling profiles, each obtained via a call to `autoscaler_profile`.
#' @param ... Other named arguments that will be treated as resource properties.
#' @param name For `autoscaler_profile`, a name for the profile.
#' @param minsize,maxsize,default For `autoscaler_profile`, the minimum, maximum and default number of instances.
#' @param scale_out,scale_in For `autoscaler_profile`, the CPU usage (a fraction between 0 and 1) at which to scale out and in, respectively.
#' @param interval For `autoscaler_profile`, The interval between samples, in ISO 8601 format. The default is 1 minute.
#' @param window For `autoscaler_profile`, the window width over which to compute the percentage CPU. The default is 5 minutes.
#'
#' @seealso
#' [create_vm_scaleset], [vmss_config]
#' @examples
#' autoscaler_config()
#' autoscaler_config(list(
#' autoscaler_profile(minsize=2, maxsize=50, scale_out=0.9, scale_in=0.1)
#' ))
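#' # a sketch of a wider scaling range with a 10-minute averaging window
#' autoscaler_config(list(
#' autoscaler_profile(minsize=5, maxsize=100, scale_out=0.8, scale_in=0.2, window="PT10M")
#' ))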
#' @export
autoscaler_config <- function(profiles=list(autoscaler_profile()), ...)
{
props <- list(profiles=profiles, ...)
structure(list(properties=props), class="as_config")
}
build_resource_fields.as_config <- function(config, ...)
{
config$properties$profiles <- lapply(config$properties$profiles, unclass)
utils::modifyList(as_default, config)
}
add_template_variables.as_config <- function(config, ...)
{
name <- "[concat(parameters('vmName'), '-as')]"
id <- "[resourceId('Microsoft.Insights/autoscaleSettings', variables('asName'))]"
ref <- "[concat('Microsoft.Insights/autoscaleSettings/', variables('asName'))]"
capacity <- "[mul(int(parameters('instanceCount')), 10)]"
scaleval <- "[max(div(int(parameters('instanceCount')), 5), 1)]"
list(asName=name, asId=id, asRef=ref, asMaxCapacity=capacity, asScaleValue=scaleval)
}
#' @rdname autoscaler_config
#' @export
autoscaler_profile <- function(name="Profile", minsize=1, maxsize=NA, default=NA, scale_out=0.75, scale_in=0.25,
interval="PT1M", window="PT5M")
{
if(is.na(maxsize))
maxsize <- "[variables('asMaxCapacity')]"
if(is.na(default))
default <- "[parameters('instanceCount')]"
capacity <- list(minimum=minsize, maximum=maxsize, default=default)
trigger <- list(
metricName="Percentage CPU",
metricNamespace="",
metricResourceUri="[variables('vmId')]",
timeGrain=interval,
timeWindow=window,
statistic="Average",
timeAggregation="Average"
)
action <- list(
type="ChangeCount",
value="[variables('asScaleValue')]",
cooldown=interval
)
rule_out <- list(metricTrigger=trigger, scaleAction=action)
rule_out$metricTrigger$operator <- "GreaterThan"
rule_out$metricTrigger$threshold <- round(scale_out * 100)
rule_out$scaleAction$direction <- "Increase"
rule_in <- list(metricTrigger=trigger, scaleAction=action)
rule_in$metricTrigger$operator <- "LessThan"
rule_in$metricTrigger$threshold <- round(scale_in * 100)
rule_in$scaleAction$direction <- "Decrease"
prof <- list(
name=name,
capacity=capacity,
rules=list(rule_out, rule_in)
)
structure(prof, class="as_profile_config")
}
#' Virtual machine resource class
#'
#' Class representing a virtual machine resource. In general, the methods in this class should not be called directly, nor should objects be directly instantiated from it. Use the `az_vm_template` class for interacting with VMs instead.
#'
#' @docType class
#' @section Methods:
#' The following methods are available, in addition to those provided by the [AzureRMR::az_resource] class:
#' - `start(wait=TRUE)`: Start the VM. By default, wait until the startup process is complete.
#' - `stop(deallocate=TRUE, wait=FALSE)`: Stop the VM. By default, deallocate it as well.
#' - `restart(wait=TRUE)`: Restart the VM.
#' - `run_deployed_command(command, parameters, script)`: Run a PowerShell command on the VM.
#' - `run_script(script, parameters)`: Run a script on the VM. For a Linux VM, this will be a shell script; for a Windows VM, a PowerShell script. Pass the script as a character vector.
#' - `sync_vm_status()`: Check the status of the VM.
#' - `resize(size, deallocate=FALSE, wait=FALSE)`: Resize the VM. Optionally stop and deallocate it first (may sometimes be necessary).
#' - `redeploy()`: Redeploy the VM.
#' - `reimage()`: Reimage the VM.
#' - `get_public_ip_address(nic=1, config=1)`: Get the public IP address of the VM. Returns NA if the VM is shut down, or is not publicly accessible.
#' - `get_private_ip_address(nic=1, config=1)`: Get the private IP address of the VM.
#' - `get_public_ip_resource(nic=1, config=1)`: Get the Azure resource for the VM's public IP address.
#' - `get_nic(nic=1)`: Get the VM's network interface resource.
#' - `get_vnet(nic=1, config=1)`: Get the VM's virtual network resource.
#' - `get_nsg(nic=1, config=1)`: Get the VM's network security group resource. Note that an NSG can be attached to either the VM's network interface or to its virtual network subnet; if there is an NSG attached to both, this method returns a list containing the two NSG resource objects.
#' - `get_disk(disk="os")`: Get a managed disk resource attached to the VM. The `disk` argument can be "os" for the OS disk, or a number indicating the LUN of a data disk. AzureVM only supports managed disks.
#' - `add_extension(publisher, type, version, settings=list(), protected_settings=list(), key_vault_settings=list())`: Add an extension to the VM.
#' - `do_vm_operation(...)`: Carry out an arbitrary operation on the VM resource. See the `do_operation` method of the [AzureRMR::az_resource] class for more details.
#'
#' @seealso
#' [AzureRMR::az_resource], [get_vm_resource], [az_vm_template]
#'
#' [VM API reference](https://docs.microsoft.com/en-us/rest/api/compute/virtualmachines)
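#'
#' @examples
#' \dontrun{
#'
#' sub <- AzureRMR::get_azure_login()$
#' get_subscription("subscription_id")
#'
#' # a minimal sketch of working with the raw VM resource directly
#' # (assumes an existing VM named "myvm" in resource group "rgname")
#' vm <- sub$get_resource_group("rgname")$get_vm_resource("myvm")
#' vm$sync_vm_status()
#' vm$get_private_ip_address()
#' vm$run_script("echo hello > /tmp/hello.txt")
#'
#' }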
#' @format An R6 object of class `az_vm_resource`, inheriting from `AzureRMR::az_resource`.
#' @export
az_vm_resource <- R6::R6Class("az_vm_resource", inherit=AzureRMR::az_resource,
public=list(
status=NULL,
# need to record these since AzureRMR can't currently get API versions for subresources
nic_api_version="2020-05-01",
ip_api_version="2020-05-01",
sync_vm_status=function()
{
get_status <- function(lst)
{
status <- lapply(lst, `[[`, "code")
names(status) <- sapply(status, function(x) sub("/.*$", "", x))
vapply(status, function(x) sub("^[^/]+/", "", x), FUN.VALUE=character(1))
}
self$sync_fields()
res <- self$do_operation("instanceView")
self$status <- get_status(res$statuses)
self$status
},
start=function(wait=TRUE)
{
self$do_operation("start", http_verb="POST")
if(wait) private$wait_for_success("start")
},
restart=function(wait=TRUE)
{
status <- self$sync_vm_status()
self$do_operation("restart", http_verb="POST")
if(wait) private$wait_for_success("restart")
},
stop=function(deallocate=TRUE, wait=FALSE)
{
self$do_operation("powerOff", http_verb="POST")
if(deallocate)
self$do_operation("deallocate", http_verb="POST")
if(wait)
{
for(i in 1:100)
{
Sys.sleep(5)
self$sync_vm_status()
if(length(self$status) < 2 || self$status[2] %in% c("stopped", "deallocated"))
break
}
if(length(self$status) == 2 && !(self$status[2] %in% c("stopped", "deallocated")))
stop("Unable to shut down VM", call.=FALSE)
}
},
resize=function(size, deallocate=FALSE, wait=FALSE)
{
if(deallocate)
self$stop(deallocate=TRUE, wait=TRUE)
properties <- list(hardwareProfile=list(vmSize=size))
self$do_operation(http_verb="PATCH",
body=list(properties=properties), encode="json")
if(wait)
{
size <- tolower(size)
for(i in 1:100)
{
self$sync_vm_status()
newsize <- tolower(self$properties$hardwareProfile$vmSize)
if(newsize == size)
break
Sys.sleep(5)
}
if(newsize != size)
stop("Unable to resize VM", call.=FALSE)
}
},
run_deployed_command=function(command, parameters=NULL, script=NULL)
{
body <- list(commandId=command, parameters=parameters, script=script)
self$do_operation("runCommand", body=body, encode="json", http_verb="POST")
},
run_script=function(script, parameters=NULL)
{
os_prof_names <- names(self$properties$osProfile)
windows <- any(grepl("windows", os_prof_names, ignore.case=TRUE))
linux <- any(grepl("linux", os_prof_names, ignore.case=TRUE))
if(!windows && !linux)
stop("Unknown VM operating system", call.=FALSE)
cmd <- if(windows) "RunPowerShellScript" else "RunShellScript"
self$run_deployed_command(cmd, as.list(parameters), as.list(script))
},
get_public_ip_address=function(nic=1, config=1)
{
ip <- self$get_public_ip_resource(nic, config)
if(is.null(ip) || is.null(ip$properties$ipAddress))
return(NA_character_)
ip$properties$ipAddress
},
get_private_ip_address=function(nic=1, config=1)
{
nic <- self$get_nic(nic)
nic$properties$ipConfigurations[[config]]$properties$privateIPAddress
},
get_public_ip_resource=function(nic=1, config=1)
{
nic <- self$get_nic(nic)
ip_id <- nic$properties$ipConfigurations[[config]]$properties$publicIPAddress$id
if(is_empty(ip_id))
return(NULL)
az_resource$new(self$token, self$subscription, id=ip_id, api_version=self$ip_api_version)
},
get_nic=function(nic=1)
{
nic_id <- self$properties$networkProfile$networkInterfaces[[nic]]$id
if(is_empty(nic_id))
stop("Network interface resource not found", call.=FALSE)
az_resource$new(self$token, self$subscription, id=nic_id, api_version=self$nic_api_version)
},
get_vnet=function(nic=1, config=1)
{
nic <- self$get_nic(nic)
subnet_id <- nic$properties$ipConfigurations[[config]]$properties$subnet$id
vnet_id <- sub("/subnets/[^/]+$", "", subnet_id)
az_resource$new(self$token, self$subscription, id=vnet_id)
},
get_nsg=function(nic=1, config=1)
{
vnet <- self$get_vnet(nic, config)
nic <- self$get_nic(nic)
nic_nsg_id <- nic$properties$networkSecurityGroup$id
nic_nsg <- if(!is.null(nic_nsg_id))
az_resource$new(self$token, self$subscription, id=nic_nsg_id)
else NULL
# go through list of subnets, find the one where this VM is located
found <- FALSE
nic_id <- tolower(nic$id)
for(sn in vnet$properties$subnets)
{
nics <- tolower(unlist(sn$properties$ipConfigurations))
if(any(grepl(nic_id, nics, fixed=TRUE)))
{
found <- TRUE
break
}
}
if(!found)
stop("Error locating subnet for this network configuration", call.=FALSE)
subnet_nsg_id <- sn$properties$networkSecurityGroup$id
subnet_nsg <- if(!is.null(subnet_nsg_id))
az_resource$new(self$token, self$subscription, id=subnet_nsg_id)
else NULL
if(is.null(nic_nsg) && is.null(subnet_nsg))
NULL
else if(is.null(nic_nsg) && !is.null(subnet_nsg))
subnet_nsg
else if(!is.null(nic_nsg) && is.null(subnet_nsg))
nic_nsg
else list(nic_nsg, subnet_nsg)
},
get_disk=function(disk="os")
{
id <- if(disk == "os")
self$properties$storageProfile$osDisk$managedDisk$id
else if(is.numeric(disk))
self$properties$storageProfile$dataDisks[[disk]]$managedDisk$id
else stop("Invalid disk argument: should be 'os', or the data disk number", call.=FALSE)
az_resource$new(self$token, self$subscription, id=id)
},
add_extension=function(publisher, type, version, settings=list(),
protected_settings=list(), key_vault_settings=list())
{
name <- gsub("[[:punct:]]", "", type)
op <- file.path("extensions", name)
props <- list(
publisher=publisher,
type=type,
typeHandlerVersion=version,
autoUpgradeMinorVersion=TRUE,
settings=settings
)
if(!is_empty(protected_settings))
props$protectedSettings <- protected_settings
if(!is_empty(key_vault_settings))
props$protectedSettingsFromKeyVault <- key_vault_settings
self$do_operation(op, body=list(properties=props), http_verb="PUT")
},
redeploy=function()
{
self$do_operation("redeploy", http_verb="POST")
message("Redeployment started. Call the sync_vm_status() method to check progress.")
},
reimage=function()
{
self$do_operation("reimage", http_verb="POST")
message("Reimage started. Call the sync_vm_status() method to check progress.")
},
print=function(...)
{
cat("<Azure virtual machine resource ", self$name, ">\n", sep="")
osProf <- names(self$properties$osProfile)
os <- if(any(grepl("linux", osProf))) "Linux" else if(any(grepl("windows", osProf))) "Windows" else "<unknown>"
prov_status <- if(is_empty(self$status))
"<unknown>"
else paste0(names(self$status), "=", self$status, collapse=", ")
cat(" Operating system:", os, "\n")
cat(" Status:", prov_status, "\n")
cat("---\n")
cat(AzureRMR::format_public_fields(self,
exclude=c("subscription", "resource_group", "type", "name", "status", "is_synced",
"nic_api_version", "ip_api_version")))
cat(AzureRMR::format_public_methods(self))
invisible(NULL)
}
),
private=list(
init_and_deploy=function(...)
{
stop("Do not use 'az_vm_resource' to create a new VM", call.=FALSE)
},
wait_for_success=function(op)
{
for(i in 1:1000)
{
Sys.sleep(5)
self$sync_vm_status()
if(length(self$status) == 2 &&
# self$status[1] == "succeeded" &&
self$status[2] == "running")
break
}
if(length(self$status) < 2 ||
# self$status[1] != "succeeded" ||
self$status[2] != "running")
stop("Unable to ", op, " VM", call.=FALSE)
}
))
#' Virtual machine template class
#'
#' Class representing a virtual machine deployment template. This class keeps track of all resources that are created as part of deploying a VM, and exposes methods for managing them.
#'
#' @docType class
#' @section Fields:
#' The following fields are exposed, in addition to those provided by the [AzureRMR::az_template] class.
#' - `dns_name`: The DNS name for the VM. Will be NULL if the VM is not publicly visible, or doesn't have a domain name assigned to its public IP address.
#' - `identity`: The managed identity details for the VM. Will be NULL if the VM doesn't have an identity assigned.
#' @section Methods:
#' The following methods are available, in addition to those provided by the [AzureRMR::az_template] class.
#' - `start(wait=TRUE)`: Start the VM. By default, wait until the startup process is complete.
#' - `stop(deallocate=TRUE, wait=FALSE)`: Stop the VM. By default, deallocate it as well.
#' - `restart(wait=TRUE)`: Restart the VM.
#' - `run_deployed_command(command, parameters, script)`: Run a PowerShell command on the VM.
#' - `run_script(script, parameters)`: Run a script on the VM. For a Linux VM, this will be a shell script; for a Windows VM, a PowerShell script. Pass the script as a character vector.
#' - `sync_vm_status()`: Check the status of the VM.
#' - `resize(size, deallocate=FALSE, wait=FALSE)`: Resize the VM. Optionally stop and deallocate it first (may sometimes be necessary).
#' - `redeploy()`: Redeploy the VM.
#' - `reimage()`: Reimage the VM.
#' - `get_public_ip_address(nic=1, config=1)`: Get the public IP address of the VM. Returns NA if the VM is stopped, or is not publicly accessible.
#' - `get_private_ip_address(nic=1, config=1)`: Get the private IP address of the VM.
#' - `get_public_ip_resource(nic=1, config=1)`: Get the Azure resource for the VM's public IP address.
#' - `get_nic(nic=1)`: Get the VM's network interface resource.
#' - `get_vnet(nic=1, config=1)`: Get the VM's virtual network resource.
#' - `get_nsg(nic=1, config=1)`: Get the VM's network security group resource. Note that an NSG can be attached to either the VM's network interface or to its virtual network subnet; if there is an NSG attached to both, this method returns a list containing the two NSG resource objects.
#' - `get_disk(disk="os")`: Get a managed disk resource attached to the VM. The `disk` argument can be "os" for the OS disk, or a number indicating the LUN of a data disk. AzureVM only supports managed disks.
#' - `add_extension(publisher, type, version, settings=list(), protected_settings=list(), key_vault_settings=list())`: Add an extension to the VM.
#' - `do_vm_operation(...)`: Carries out an arbitrary operation on the VM resource. See the `do_operation` method of the [AzureRMR::az_resource] class for more details.
#'
#' Many of these methods are actually provided by the [az_vm_resource] class, and propagated to the template as active bindings.
#'
#' @details
#' A single virtual machine in Azure is actually a collection of resources, including any and all of the following.
#' - Network interface (Azure resource type `Microsoft.Network/networkInterfaces`)
#' - Network security group (Azure resource type `Microsoft.Network/networkSecurityGroups`)
#' - Virtual network (Azure resource type `Microsoft.Network/virtualNetworks`)
#' - Public IP address (Azure resource type `Microsoft.Network/publicIPAddresses`)
#' - The VM itself (Azure resource type `Microsoft.Compute/virtualMachines`)
#'
#' By wrapping the deployment template used to create these resources, the `az_vm_template` class allows managing them all as a single entity.
#'
#' @seealso
#' [AzureRMR::az_template], [create_vm], [get_vm], [delete_vm]
#'
#' [VM API reference](https://docs.microsoft.com/en-us/rest/api/compute/virtualmachines)
#'
#' @examples
#' \dontrun{
#'
#' sub <- AzureRMR::get_azure_login()$
#' get_subscription("subscription_id")
#'
#' vm <- sub$get_vm("myvm")
#'
#' vm$identity
#'
#' vm$start()
#' vm$get_private_ip_address()
#' vm$get_public_ip_address()
#'
#' vm$run_script("echo hello world! > /tmp/hello.txt")
#'
#' vm$stop()
#' vm$get_private_ip_address()
#' vm$get_public_ip_address() # NA, assuming VM has a dynamic IP address
#'
#' vm$resize("Standard_DS13_v2")
#' vm$sync_vm_status()
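#'
#' # inspect resources deployed alongside the VM (illustrative)
#' vm$get_vnet()
#' vm$get_nsg()
#' vm$get_disk("os")
#'
#' # add an extension: a sketch, assuming the Linux custom script extension details below
#' vm$add_extension(publisher="Microsoft.Azure.Extensions", type="CustomScript", version="2.0",
#' settings=list(commandToExecute="echo hello"))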
#'
#' }
#' @format An R6 object of class `az_vm_template`, inheriting from `AzureRMR::az_template`.
#' @export
az_vm_template <- R6::R6Class("az_vm_template", inherit=az_template,
public=list(
dns_name=NULL,
initialize=function(token, subscription, resource_group, name, ..., wait=TRUE)
{
super$initialize(token, subscription, resource_group, name, ..., wait=wait)
if(wait)
{
private$vm <- az_vm_resource$new(self$token, self$subscription, id=self$properties$outputs$vmResource$value)
# get the hostname/IP address for the VM
outputs <- unlist(self$properties$outputResources)
ip_id <- grep("publicIPAddresses/.+$", outputs, ignore.case=TRUE, value=TRUE)
if(!is_empty(ip_id))
{
ip <- az_resource$new(self$token, self$subscription, id=ip_id)
self$dns_name <- ip$properties$dnsSettings$fqdn
}
}
else message("Deployment started. Call the sync_vm_status() method to track the status of the deployment.")
},
delete=function(confirm=TRUE, free_resources=TRUE)
{
# must reorder template output resources so that freeing resources will work
private$reorder_for_delete()
super$delete(confirm=confirm, free_resources=free_resources)
},
print=function(...)
{
cat("<Azure virtual machine ", self$name, ">\n", sep="")
osProf <- names(private$vm$properties$osProfile)
os <- if(any(grepl("linux", osProf))) "Linux" else if(any(grepl("windows", osProf))) "Windows" else "<unknown>"
exclusive <- self$properties$mode == "Complete"
status <- if(is_empty(private$vm$status))
"<unknown>"
else paste0(names(private$vm$status), "=", private$vm$status, collapse=", ")
cat(" Operating system:", os, "\n")
cat(" Exclusive resource group:", exclusive, "\n")
cat(" Domain name:", self$dns_name, "\n")
cat(" Status:", status, "\n")
cat("---\n")
exclude <- c("subscription", "resource_group", "name", "dns_name")
cat(AzureRMR::format_public_fields(self, exclude=exclude))
cat(AzureRMR::format_public_methods(self))
invisible(NULL)
}
),
# propagate resource methods up to template
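    # each binding returns the corresponding method of the wrapped VM resource, so a call such as
    # vm$start() retrieves private$vm$start via the binding and then invokes it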
active=list(
identity=function()
private$vm$identity,
sync_vm_status=function()
private$vm$sync_vm_status,
start=function()
private$vm$start,
stop=function()
private$vm$stop,
restart=function()
private$vm$restart,
add_extension=function()
private$vm$add_extension,
resize=function()
private$vm$resize,
run_deployed_command=function()
private$vm$run_deployed_command,
run_script=function()
private$vm$run_script,
get_public_ip_address=function()
private$vm$get_public_ip_address,
get_private_ip_address=function()
private$vm$get_private_ip_address,
get_public_ip_resource=function()
private$vm$get_public_ip_resource,
get_nic=function()
private$vm$get_nic,
get_vnet=function()
private$vm$get_vnet,
get_nsg=function()
private$vm$get_nsg,
get_disk=function()
private$vm$get_disk,
redeploy=function()
private$vm$redeploy,
reimage=function()
private$vm$reimage,
do_vm_operation=function()
private$vm$do_operation
),
private=list(
vm=NULL,
reorder_for_delete=function()
{
is_type <- function(id, type)
{
grepl(type, id, fixed=TRUE)
}
# insert managed disks into deletion queue
stor <- private$vm$properties$storageProfile
managed_disks <- c(
stor$osDisk$managedDisk$id,
lapply(stor$dataDisks, function(x) x$managedDisk$id)
)
outs <- unique(c(unlist(self$properties$outputResources), unlist(managed_disks)))
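# deletion order matters: the VM must go before its disks and NIC, and the NIC before the
# vnet, public IP and NSG, otherwise Azure will refuse to delete resources that are still in use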
new_order <- sapply(outs, function(id)
{
if(is_type(id, "Microsoft.Compute/virtualMachines")) 1
else if(is_type(id, "Microsoft.Compute/disks")) 2
else if(is_type(id, "Microsoft.Network/networkInterfaces")) 3
else if(is_type(id, "Microsoft.Network/virtualNetworks")) 4
else if(is_type(id, "Microsoft.Network/publicIPAddresses")) 5
else if(is_type(id, "Microsoft.Network/networkSecurityGroups")) 6
else 0 # delete all other resources first
})
outs <- outs[order(new_order)]
self$properties$outputResources <- lapply(outs, function(x) list(id=x))
}
))
# end of AzureVM/R/az_vm_template.R
#' Virtual machine scaleset resource class
#'
#' Class representing a virtual machine scaleset resource. In general, the methods in this class should not be called directly, nor should objects be directly instantiated from it. Use the `az_vmss_template` class for interacting with scalesets instead.
#'
#' @docType class
#' @section Methods:
#' The following methods are available, in addition to those provided by the [AzureRMR::az_template] class.
#' - `sync_vmss_status(id=NULL)`: Check the status of the scaleset, or of the given instances.
#' - `list_instances()`: Return a list of [az_vm_resource] objects, one for each VM instance in the scaleset. Note that if the scaleset has an autoscaler attached, the number of instances will vary depending on the load.
#' - `get_instance(id)`: Return a specific VM instance in the scaleset.
#' - `start(id=NULL, wait=FALSE)`: Start the scaleset. In this and the other methods listed here, `id` can be an optional character vector of instance IDs; if supplied, only carry out the operation for those instances.
#' - `restart(id=NULL, wait=FALSE)`: Restart the scaleset.
#' - `stop(deallocate=TRUE, id=NULL, wait=FALSE)`: Stop the scaleset.
#' - `get_public_ip_address()`: Get the public IP address of the scaleset (technically, of the load balancer). If the scaleset doesn't have a load balancer attached, returns NA.
#' - `get_vm_public_ip_addresses(id=NULL, nic=1, config=1)`: Get the public IP addresses for the instances in the scaleset. Returns NA for the instances that are stopped or not publicly accessible.
#' - `get_vm_private_ip_addresses(id=NULL, nic=1, config=1)`: Get the private IP addresses for the instances in the scaleset.
#' - `get_vnet(nic=1, config=1)`: Get the scaleset's virtual network resource.
#' - `get_nsg(nic=1, config=1)`: Get the scaleset's network security group resource.
#' - `run_deployed_command(command, parameters=NULL, script=NULL, id=NULL)`: Run a PowerShell command on the instances in the scaleset.
#' - `run_script(script, parameters=NULL, id=NULL)`: Run a script on the instances in the scaleset. For Linux instances, this will be a shell script; for Windows instances, a PowerShell script. Pass the script as a character vector.
#' - `reimage(id=NULL, datadisks=FALSE)`: Reimage the instances in the scaleset. If `datadisks` is TRUE, reimage any attached data disks as well.
#' - `redeploy(id=NULL)`: Redeploy the instances in the scaleset.
#' - `mapped_vm_operation(..., id=NULL)`: Carry out an arbitrary operation on the instances in the scaleset. See the `do_operation` method of the [AzureRMR::az_resource] class for more details.
#' - `add_extension(publisher, type, version, settings=list(), protected_settings=list(), key_vault_settings=list())`: Add an extension to the scaleset.
#' - `do_vmss_operation(...)`: Carry out an arbitrary operation on the scaleset resource (as opposed to the instances in the scaleset).
#'
#' @details
#' A single virtual machine scaleset in Azure is actually a collection of resources, including some or all of the following.
#' - Network security group (Azure resource type `Microsoft.Network/networkSecurityGroups`)
#' - Virtual network (Azure resource type `Microsoft.Network/virtualNetworks`)
#' - Load balancer (Azure resource type `Microsoft.Network/loadBalancers`)
#' - Public IP address (Azure resource type `Microsoft.Network/publicIPAddresses`)
#' - Autoscaler (Azure resource type `Microsoft.Insights/autoscaleSettings`)
#' - The scaleset itself (Azure resource type `Microsoft.Compute/virtualMachineScaleSets`)
#'
#' By wrapping the deployment template used to create these resources, the `az_vmss_template` class allows managing them all as a single entity.
#'
#' @section Instance operations:
#' AzureVM has the ability to parallelise scaleset instance operations using a background process pool provided by AzureRMR. This can lead to significant speedups when working with scalesets with high instance counts. The pool is created automatically the first time that it is required, and remains persistent for the session. You can control the size of the process pool with the `azure_vm_minpoolsize` and `azure_vm_maxpoolsize` options, which have default values 2 and 10 respectively.
#'
#' The `id` argument lets you specify a subset of instances on which to carry out an operation. This can be a character vector of instance IDs; a list of instance objects such as returned by `list_instances`; or a single instance object. The default (NULL) is to carry out the operation on all instances.
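#'
#' As a rough sketch (assuming `vmss` is an object of this class), the following calls are equivalent
#' ways of restarting the first two instances, given that their instance IDs are "0" and "1":
#' ```
#' vmss$restart(id=c("0", "1"))
#' vmss$restart(id=vmss$list_instances()[1:2])
#' ```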
#'
#' @seealso
#' [AzureRMR::az_resource], [get_vm_scaleset_resource], [az_vmss_template], [AzureRMR::init_pool]
#'
#' [VM scaleset API reference](https://docs.microsoft.com/en-us/rest/api/compute/virtualmachinescalesets)
#' @format An R6 object of class `az_vmss_resource`, inheriting from `AzureRMR::az_resource`.
#' @export
az_vmss_resource <- R6::R6Class("az_vmss_resource", inherit=AzureRMR::az_resource,
public=list(
status=NULL,
sync_vmss_status=function(id=NULL)
{
statuses <- private$vm_map(id, function(res)
{
status <- res$sync_vm_status()
if(length(status) < 2)
status <- c(status, NA)
status
})
self$status <- data.frame(id=names(statuses), do.call(rbind, statuses), stringsAsFactors=FALSE)
colnames(self$status) <- c("id", "ProvisioningState", "PowerState")
row.names(self$status) <- NULL
self$status
},
list_instances=function()
{
lst <- named_list(get_paged_list(self$do_operation("virtualMachines")), "instanceId")
lapply(lst, private$make_vm_resource)
},
get_instance=function(id)
{
obj <- self$do_operation(file.path("virtualMachines", id))
private$make_vm_resource(obj)
},
start=function(id=NULL, wait=FALSE)
{
body <- if(!is.null(id)) list(instanceIds=I(as.character(id))) else NULL
self$do_operation("start", body=body, http_verb="POST")
if(wait)
{
for(i in 1:100)
{
Sys.sleep(5)
status <- self$sync_vmss_status(id)
if(all(status$PowerState == "running"))
break
}
if(!all(status$PowerState == "running"))
stop("Unable to start VM scaleset", call.=FALSE)
}
},
restart=function(id=NULL, wait=FALSE)
{
body <- if(!is.null(id)) list(instanceIds=I(as.character(id))) else NULL
self$do_operation("restart", body=body, http_verb="POST")
if(wait)
{
for(i in 1:100)
{
Sys.sleep(5)
status <- self$sync_vmss_status(id)
if(all(status$PowerState == "running"))
break
}
if(!all(status$PowerState == "running"))
stop("Unable to restart VM scaleset", call.=FALSE)
}
},
stop=function(deallocate=TRUE, id=NULL, wait=FALSE)
{
body <- if(!is.null(id)) list(instanceIds=I(as.character(id))) else NULL
self$do_operation("powerOff", body=body, http_verb="POST")
if(deallocate)
self$do_operation("deallocate", body=body, http_verb="POST")
if(wait)
{
for(i in 1:100)
{
Sys.sleep(5)
status <- self$sync_vmss_status(id)
if(all(status$PowerState %in% c("stopped", "deallocated")))
break
}
if(!all(status$PowerState %in% c("stopped", "deallocated")))
stop("Unable to shut down VM scaleset", call.=FALSE)
}
},
get_vm_public_ip_addresses=function(id=NULL, nic=1, config=1)
{
unlist(private$vm_map(id, function(vm) vm$get_public_ip_address(nic, config)))
},
get_vm_private_ip_addresses=function(id=NULL, nic=1, config=1)
{
unlist(private$vm_map(id, function(vm) vm$get_private_ip_address(nic, config)))
},
get_vnet=function(nic=1, config=1)
{
subnet_id <- self$properties$
virtualMachineProfile$networkProfile$networkInterfaceConfigurations[[nic]]$properties$
ipConfigurations[[config]]$properties$
subnet$id
vnet_id <- sub("/subnets/[^/]+$", "", subnet_id)
az_resource$new(self$token, self$subscription, id=vnet_id)
},
get_nsg=function(nic=1, config=1)
{
vnet <- self$get_vnet(nic, config)
# go through list of subnets, find the one where this scaleset's instances are located
found <- FALSE
vmss_id <- tolower(self$id)
for(sn in vnet$properties$subnets)
{
nics <- tolower(unlist(sn$properties$ipConfigurations))
if(any(grepl(vmss_id, nics, fixed=TRUE)))
{
found <- TRUE
break
}
}
if(!found)
stop("Unable to find subnet for this network configuration", call.=FALSE)
subnet_nsg_id <- sn$properties$networkSecurityGroup$id
if(!is.null(subnet_nsg_id))
az_resource$new(self$token, self$subscription, id=subnet_nsg_id)
else NULL
},
run_deployed_command=function(command, parameters=NULL, script=NULL, id=NULL)
{
private$vm_map(id, function(vm) vm$run_deployed_command(command, parameters, script))
},
run_script=function(script, parameters=NULL, id=NULL)
{
private$vm_map(id, function(vm) vm$run_script(script, parameters))
},
reimage=function(id=NULL, datadisks=FALSE)
{
op <- if(datadisks) "reimageall" else "reimage"
if(is.null(id))
self$do_operation(op, http_verb="POST")
else private$vm_map(id, function(vm) vm$do_operation(op, http_verb="POST"))
message("Reimage started. Call the sync_vmss_status() method to check progress.")
},
redeploy=function(id=NULL)
{
if(is.null(id))
self$do_operation("redeploy", http_verb="POST")
else private$vm_map(id, function(vm) vm$do_operation("redeploy", http_verb="POST"))
message("Redeployment started. Call the sync_vmss_status() method to check progress.")
},
mapped_vm_operation=function(..., id=NULL)
{
private$vm_map(id, function(vm) vm$do_operation(...))
},
add_extension=function(publisher, type, version, settings=list(),
protected_settings=list(), key_vault_settings=list())
{
name <- gsub("[[:punct:]]", "", type)
op <- file.path("extensions", name)
props <- list(
publisher=publisher,
type=type,
typeHandlerVersion=version,
autoUpgradeMinorVersion=TRUE,
settings=settings
)
if(!is_empty(protected_settings))
props$protectedSettings <- protected_settings
if(!is_empty(key_vault_settings))
props$protectedSettingsFromKeyVault <- key_vault_settings
self$do_operation(op, body=list(properties=props), http_verb="PUT")
},
print=function(...)
{
cat("<Azure virtual machine scaleset resource ", self$name, ">\n", sep="")
osProf <- names(self$properties$virtualMachineProfile$osProfile)
os <- if(any(grepl("linux", osProf))) "Linux" else if(any(grepl("windows", osProf))) "Windows" else "<unknown>"
cat(" Operating system:", os, "\n")
cat(" Status:\n")
if(is_empty(self$status))
cat(" <unknown>\n")
else
{
status <- head(self$status)
row.names(status) <- paste0(" ", row.names(status))
print(status)
if(nrow(self$status) > nrow(status))
cat(" ...\n")
}
cat("---\n")
exclude <- c("subscription", "resource_group", "type", "name", "status")
cat(AzureRMR::format_public_fields(self, exclude=exclude))
cat(AzureRMR::format_public_methods(self))
invisible(NULL)
}
),
private=list(
init_and_deploy=function(...)
{
stop("Do not use 'az_vmss_resource' to create a new VM scaleset", call.=FALSE)
},
make_vm_resource=function(params)
{
params$instanceId <- NULL
obj <- az_vm_resource$new(self$token, self$subscription, deployed_properties=params)
# some subresource API versions don't match between VMs and VM scalesets
obj$nic_api_version <- "2020-06-01"
obj$ip_api_version <- "2020-06-01"
# make type and name useful
obj$type <- self$type
obj$name <- file.path(self$name, "virtualMachines", basename(params$id))
obj
},
vm_map=function(id, f)
{
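# 'id' can be NULL (all instances), a character vector of instance IDs, a list of instance
# objects, or a single instance object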
vms <- if(is.null(id))
self$list_instances()
else if(is.list(id) && all(sapply(id, is_vm_resource)))
id
else if(is_vm_resource(id))
structure(list(id), names=basename(id$id))
else self$list_instances()[as.character(id)]
if(length(vms) < 2 || getOption("azure_vm_maxpoolsize") == 0)
return(lapply(vms, f))
minsize <- getOption("azure_vm_minpoolsize")
maxsize <- getOption("azure_vm_maxpoolsize")
size <- min(max(length(vms), minsize), maxsize)
init_pool(size)
pool_lapply(vms, f)
}
))
# end of AzureVM/R/az_vmss_resource.R
#' Virtual machine scaleset (cluster) template class
#'
#' Class representing a virtual machine scaleset deployment template. This class keeps track of all resources that are created as part of deploying a scaleset, and exposes methods for managing them.
#'
#' @docType class
#' @section Fields:
#' The following fields are exposed, in addition to those provided by the [AzureRMR::az_template] class.
#' - `dns_name`: The DNS name for the scaleset (technically, the fully qualified domain name of its load balancer's public IP address). Will be NULL if the scaleset is not publicly visible, or doesn't have a load balancer attached.
#' - `identity`: The managed identity details for the scaleset. Will be NULL if the scaleset doesn't have an identity assigned.
#' @section Methods:
#' The following methods are available, in addition to those provided by the [AzureRMR::az_template] class.
#' - `sync_vmss_status(id=NULL)`: Check the status of the scaleset, or of the given instances.
#' - `list_instances()`: Return a list of [az_vm_resource] objects, one for each VM instance in the scaleset. Note that if the scaleset has an autoscaler attached, the number of instances will vary depending on the load.
#' - `get_instance(id)`: Return a specific VM instance in the scaleset.
#' - `start(id=NULL, wait=FALSE)`: Start the scaleset. In this and the other methods listed here, `id` can be an optional character vector of instance IDs; if supplied, only carry out the operation for those instances.
#' - `restart(id=NULL, wait=FALSE)`: Restart the scaleset.
#' - `stop(deallocate=TRUE, id=NULL, wait=FALSE)`: Stop the scaleset.
#' - `get_public_ip_address()`: Get the public IP address of the scaleset (technically, of the load balancer). If the scaleset doesn't have a load balancer attached, returns NA.
#' - `get_vm_public_ip_addresses(id=NULL, nic=1, config=1)`: Get the public IP addresses for the instances in the scaleset. Returns NA for the instances that are stopped or not publicly accessible.
#' - `get_vm_private_ip_addresses(id=NULL, nic=1, config=1)`: Get the private IP addresses for the instances in the scaleset.
#' - `get_public_ip_resource()`: Get the Azure resource for the load balancer's public IP address.
#' - `get_vnet(nic=1, config=1)`: Get the scaleset's virtual network resource.
#' - `get_nsg(nic=1, config=1)`: Get the scaleset's network security group resource.
#' - `get_load_balancer()`: Get the scaleset's load balancer resource.
#' - `get_autoscaler()`: Get the scaleset's autoscaler resource.
#' - `run_deployed_command(command, parameters=NULL, script=NULL, id=NULL)`: Run a PowerShell command on the instances in the scaleset.
#' - `run_script(script, parameters=NULL, id=NULL)`: Run a script on the instances in the scaleset. For Linux instances, this will be a shell script; for Windows instances, a PowerShell script. Pass the script as a character vector.
#' - `reimage(id=NULL, datadisks=FALSE)`: Reimage the instances in the scaleset. If `datadisks` is TRUE, reimage any attached data disks as well.
#' - `redeploy(id=NULL)`: Redeploy the instances in the scaleset.
#' - `mapped_vm_operation(..., id=NULL)`: Carry out an arbitrary operation on the instances in the scaleset. See the `do_operation` method of the [AzureRMR::az_resource] class for more details.
#' - `add_extension(publisher, type, version, settings=list(), protected_settings=list(), key_vault_settings=list())`: Add an extension to the scaleset.
#' - `do_vmss_operation(...)`: Carry out an arbitrary operation on the scaleset resource (as opposed to the instances in the scaleset).
#'
#' Many of these methods are actually provided by the [az_vmss_resource] class, and propagated to the template as active bindings.
#'
#' @details
#' A virtual machine scaleset in Azure is actually a collection of resources, including some or all of the following.
#' - Network security group (Azure resource type `Microsoft.Network/networkSecurityGroups`)
#' - Virtual network (Azure resource type `Microsoft.Network/virtualNetworks`)
#' - Load balancer (Azure resource type `Microsoft.Network/loadBalancers`)
#' - Public IP address (Azure resource type `Microsoft.Network/publicIPAddresses`)
#' - Autoscaler (Azure resource type `Microsoft.Insights/autoscaleSettings`)
#' - The scaleset itself (Azure resource type `Microsoft.Compute/virtualMachineScaleSets`)
#'
#' By wrapping the deployment template used to create these resources, the `az_vmss_template` class allows managing them all as a single entity.
#'
#' @section Instance operations:
#' AzureVM has the ability to parallelise scaleset instance operations using a background process pool provided by AzureRMR. This can lead to significant speedups when working with scalesets with high instance counts. The pool is created automatically the first time that it is required, and remains persistent for the session. You can control the size of the process pool with the `azure_vm_minpoolsize` and `azure_vm_maxpoolsize` options, which have default values 2 and 10 respectively.
#'
#' The `id` argument lets you specify a subset of instances on which to carry out an operation. This can be a character vector of instance IDs; a list of instance objects such as returned by `list_instances`; or a single instance object. The default (NULL) is to carry out the operation on all instances.
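#'
#' For example, to enlarge the process pool before running a command across a large scaleset (the option
#' value and script below are purely illustrative, and `vmss` is assumed to be a scaleset object):
#' ```
#' options(azure_vm_maxpoolsize=20)
#' vmss$run_script("sudo apt-get update -y")
#' ```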
#'
#' @seealso
#' [AzureRMR::az_template], [create_vm_scaleset], [get_vm_scaleset], [delete_vm_scaleset], [AzureRMR::init_pool]
#'
#' [VM scaleset API reference](https://docs.microsoft.com/en-us/rest/api/compute/virtualmachinescalesets)
#'
#' @examples
#' \dontrun{
#'
#' sub <- AzureRMR::get_azure_login()$
#' get_subscription("subscription_id")
#'
#' vmss <- sub$get_vm_scaleset("myscaleset")
#'
#' vmss$identity
#'
#' vmss$get_public_ip_address() # NA if the scaleset doesn't have a load balancer
#'
#' vmss$start()
#' vmss$get_vm_private_ip_addresses()
#' vmss$get_vm_public_ip_addresses() # NA if scaleset nodes are not publicly visible
#'
#' instances <- vmss$list_instances()
#' first <- instances[1]
#' vmss$run_script("echo hello world! > /tmp/hello.txt", id=first)
#' vmss$stop(id=first)
#' vmss$reimage(id=first)
#'
#' vmss$sync_vmss_status()
#'
#' }
#' @format An R6 object of class `az_vmss_template`, inheriting from `AzureRMR::az_template`.
#' @export
az_vmss_template <- R6::R6Class("az_vmss_template", inherit=az_template,
public=list(
dns_name=NULL,
initialize=function(token, subscription, resource_group, name, ..., wait=TRUE)
{
super$initialize(token, subscription, resource_group, name, ..., wait=wait)
if(wait)
{
private$vmss <- az_vmss_resource$new(self$token, self$subscription,
id=self$properties$outputs$vmResource$value)
# get the hostname/IP address for the VM
outputs <- unlist(self$properties$outputResources)
ip_id <- grep("publicIPAddresses/.+$", outputs, ignore.case=TRUE, value=TRUE)
if(!is_empty(ip_id))
{
ip <- az_resource$new(self$token, self$subscription, id=ip_id)
self$dns_name <- ip$properties$dnsSettings$fqdn
}
}
else message("Deployment started. Call the sync_vmss_status() method to track the status of the deployment.")
},
delete=function(confirm=TRUE, free_resources=TRUE)
{
# must reorder template output resources so that freeing resources will work
private$reorder_for_delete()
super$delete(confirm=confirm, free_resources=free_resources)
},
print=function(...)
{
cat("<Azure virtual machine scaleset ", self$name, ">\n", sep="")
osProf <- names(private$vmss$properties$virtualMachineProfile$osProfile)
os <- if(any(grepl("linux", osProf))) "Linux" else if(any(grepl("windows", osProf))) "Windows" else "<unknown>"
exclusive <- self$properties$mode == "Complete"
cat(" Operating system:", os, "\n")
cat(" Exclusive resource group:", exclusive, "\n")
cat(" Domain name:", self$dns_name, "\n")
cat(" Status:\n")
if(is_empty(private$vmss$status))
cat(" <unknown>\n")
else
{
status <- head(private$vmss$status)
row.names(status) <- paste0(" ", row.names(status))
print(status)
if(nrow(private$vmss$status) > nrow(status))
cat(" ...\n")
}
cat("---\n")
exclude <- c("subscription", "resource_group", "name", "dns_name")
cat(AzureRMR::format_public_fields(self, exclude=exclude))
cat(AzureRMR::format_public_methods(self))
invisible(NULL)
},
get_public_ip_address=function()
{
ip <- self$get_public_ip_resource()
if(!is.null(ip))
ip$properties$ipAddress
else NA_character_
},
get_public_ip_resource=function()
{
outputs <- unlist(self$properties$outputResources)
ip_id <- grep("publicIPAddresses/.+$", outputs, ignore.case=TRUE, value=TRUE)
if(is_empty(ip_id))
NULL
else az_resource$new(self$token, self$subscription, id=ip_id)
},
get_load_balancer=function()
{
outputs <- unlist(self$properties$outputResources)
lb_id <- grep("loadBalancers/.+$", outputs, ignore.case=TRUE, value=TRUE)
if(is_empty(lb_id))
NULL
else az_resource$new(self$token, self$subscription, id=lb_id)
},
get_autoscaler=function()
{
outputs <- unlist(self$properties$outputResources)
as_id <- grep("autoscaleSettings/.+$", outputs, ignore.case=TRUE, value=TRUE)
if(is_empty(as_id))
NULL
else az_resource$new(self$token, self$subscription, id=as_id)
}
),
# propagate resource methods up to template
active=list(
identity=function()
private$vmss$identity,
sync_vmss_status=function()
private$vmss$sync_vmss_status,
list_instances=function()
private$vmss$list_instances,
get_instance=function()
private$vmss$get_instance,
start=function()
private$vmss$start,
stop=function()
private$vmss$stop,
restart=function()
private$vmss$restart,
get_vm_public_ip_addresses=function()
private$vmss$get_vm_public_ip_addresses,
get_vm_private_ip_addresses=function()
private$vmss$get_vm_private_ip_addresses,
get_vnet=function()
private$vmss$get_vnet,
get_nsg=function()
private$vmss$get_nsg,
run_deployed_command=function()
private$vmss$run_deployed_command,
run_script=function()
private$vmss$run_script,
reimage=function()
private$vmss$reimage,
redeploy=function()
private$vmss$redeploy,
mapped_vm_operation=function()
private$vmss$mapped_vm_operation,
add_extension=function()
private$vmss$add_extension,
do_vmss_operation=function()
private$vmss$do_operation
),
private=list(
vmss=NULL,
reorder_for_delete=function()
{
is_type <- function(id, type)
{
grepl(type, id, fixed=TRUE)
}
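# delete the scaleset first, then disks and autoscaler, then the load balancer and its public IP,
# and finally the vnet and NSG, so that no resource is deleted while another still depends on it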
new_order <- sapply(self$properties$outputResources, function(x)
{
id <- x$id
if(is_type(id, "Microsoft.Compute/virtualMachineScaleSets")) 1
else if(is_type(id, "Microsoft.Compute/disks")) 2
else if(is_type(id, "Microsoft.Insights/autoscaleSettings")) 3
else if(is_type(id, "Microsoft.Network/loadBalancers")) 4
else if(is_type(id, "Microsoft.Network/publicIPAddresses")) 5
else if(is_type(id, "Microsoft.Network/virtualNetworks")) 6
else if(is_type(id, "Microsoft.Network/networkSecurityGroups")) 7
else 0
})
self$properties$outputResources <- self$properties$outputResources[order(new_order)]
}
))
# end of AzureVM/R/az_vmss_template.R
#' Build template definition and parameters
#'
#' @param config An object of class `vm_config` or `vmss_config` representing a virtual machine or scaleset deployment.
#' @param name The VM or scaleset name. Will also be used for the domain name label, if a public IP address is included in the deployment.
#' @param login_user An object of class `user_config` representing the login details for the admin user account on the VM.
#' @param size The VM (instance) size.
#' @param ... Unused.
#'
#' @details
#' These are methods for the generics defined in the AzureRMR package.
#'
#' @return
#' Objects of class `json`, which are JSON character strings representing the deployment template and its parameters.
#'
#' @seealso
#' [create_vm], [vm_config], [vmss_config]
#'
#' @examples
#'
#' vm <- ubuntu_18.04()
#' build_template_definition(vm)
#' build_template_parameters(vm, "myubuntuvm",
#' user_config("username", "~/.ssh/id_rsa.pub"), "Standard_DS3_v2")
#'
#' @rdname build_template
#' @aliases build_template
#' @export
build_template_definition.vm_config <- function(config, ...)
{
tpl <- list(
`$schema`="http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
contentVersion="1.0.0.0",
parameters=add_template_parameters(config),
variables=add_template_variables(config),
resources=add_template_resources(config),
outputs=tpl_outputs_default
)
jsonlite::prettify(jsonlite::toJSON(tpl, auto_unbox=TRUE, null="null"))
}
#' @rdname build_template
#' @export
build_template_definition.vmss_config <- function(config, ...)
{
tpl <- list(
`$schema`="http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
contentVersion="1.0.0.0",
parameters=add_template_parameters(config),
variables=add_template_variables(config),
resources=add_template_resources(config),
outputs=tpl_outputs_default
)
jsonlite::prettify(jsonlite::toJSON(tpl, auto_unbox=TRUE, null="null"))
}
#' @rdname build_template
#' @export
build_template_parameters.vm_config <- function(config, name, login_user, size, ...)
{
add_parameters <- function(...)
{
new_params <- lapply(list(...), function(obj) list(value=obj))
params <<- c(params, new_params)
}
stopifnot(inherits(login_user, "user_config"))
params <- list()
add_parameters(vmName=name, vmSize=size, adminUsername=login_user$user)
if(config$keylogin && !is_empty(login_user$key))
add_parameters(sshKeyData=login_user$key)
else add_parameters(adminPassword=login_user$pwd)
if(inherits(config$image, "image_marketplace"))
add_parameters(
imagePublisher=config$image$publisher,
imageOffer=config$image$offer,
imageSku=config$image$sku,
imageVersion=config$image$version
)
else add_parameters(imageId=config$image$id)
# add datadisks to params
if(!is_empty(config$datadisks))
{
# fixup datadisk LUNs and names
for(i in seq_along(config$datadisks))
{
config$datadisks[[i]]$vm_spec$lun <- i - 1
diskname <- config$datadisks[[i]]$vm_spec$name
if(!is.null(diskname))
{
newdiskname <- paste(name, diskname, i, sep="_")
config$datadisks[[i]]$res_spec$name <- newdiskname
config$datadisks[[i]]$vm_spec$name <- newdiskname
}
}
disk_res_spec <- lapply(config$datadisks, `[[`, "res_spec")
null <- sapply(disk_res_spec, is.null)
add_parameters(
dataDisks=lapply(config$datadisks, `[[`, "vm_spec"),
dataDiskResources=disk_res_spec[!null]
)
}
jsonlite::prettify(jsonlite::toJSON(params, auto_unbox=TRUE, null="null"))
}
#' @param instances For `vmss_config`, the number of (initial) instances in the VM scaleset.
#' @rdname build_template
#' @export
build_template_parameters.vmss_config <- function(config, name, login_user, size, instances, ...)
{
add_parameters <- function(...)
{
new_params <- lapply(list(...), function(obj) list(value=obj))
params <<- c(params, new_params)
}
stopifnot(inherits(login_user, "user_config"))
params <- list()
add_parameters(vmName=name, vmSize=size, instanceCount=instances, adminUsername=login_user$user)
if(config$options$keylogin && !is_empty(login_user$key))
add_parameters(sshKeyData=login_user$key)
else add_parameters(adminPassword=login_user$pwd)
if(inherits(config$image, "image_marketplace"))
add_parameters(
imagePublisher=config$image$publisher,
imageOffer=config$image$offer,
imageSku=config$image$sku,
imageVersion=config$image$version
)
else add_parameters(imageId=config$image$id)
do.call(add_parameters, config$options$params)
# add datadisks to params
if(!is_empty(config$datadisks))
{
# fixup datadisk for scaleset
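# a scaleset cannot attach pre-created managed disks to its instances, so treat attach-type
# disks as new empty disks to be created per instance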
for(i in seq_along(config$datadisks))
{
config$datadisks[[i]]$vm_spec$lun <- i - 1
if(config$datadisks[[i]]$vm_spec$createOption == "attach")
{
config$datadisks[[i]]$vm_spec$createOption <- "empty"
config$datadisks[[i]]$vm_spec$diskSizeGB <- config$datadisks[[i]]$res_spec$diskSizeGB
config$datadisks[[i]]$vm_spec$storageAccountType <- config$datadisks[[i]]$res_spec$sku
}
diskname <- config$datadisks[[i]]$vm_spec$name
if(!is.null(diskname))
{
newdiskname <- paste(name, diskname, i, sep="_")
config$datadisks[[i]]$res_spec$name <- newdiskname
config$datadisks[[i]]$vm_spec$name <- newdiskname
}
}
disk_res_spec <- lapply(config$datadisks, `[[`, "res_spec")
null <- sapply(disk_res_spec, is.null)
add_parameters(
dataDisks=lapply(config$datadisks, `[[`, "vm_spec"),
dataDiskResources=disk_res_spec[!null]
)
}
jsonlite::prettify(jsonlite::toJSON(params, auto_unbox=TRUE, null="null"))
}
add_template_parameters <- function(config, ...)
{
UseMethod("add_template_parameters")
}
add_template_variables <- function(config, ...)
{
UseMethod("add_template_variables")
}
add_template_variables.character <- function(config, type, ...)
{
# assume this is a resource ID
resname <- basename(config)
varnames <- paste0(type, c("Name", "Id"))
structure(list(resname, config), names=varnames)
}
add_template_variables.az_resource <- function(config, type, ...)
{
varnames <- paste0(type, c("Name", "Id"))
vars <- list(config$name, config$id)
# a bit hackish, should fully objectify
if(type == "vnet") # if we have a vnet, extract the 1st subnet name
{
subnet <- config$properties$subnets[[1]]$name
subnet_id <- "[concat(variables('vnetId'), '/subnets/', variables('subnet'))]"
varnames <- c(varnames, "subnet", "subnetId")
structure(c(vars, subnet, subnet_id), names=varnames)
}
else if(type == "lb") # if we have a load balancer, extract component names
{
frontend <- config$properties$frontendIPConfigurations[[1]]$name
backend <- config$properties$backendAddressPools[[1]]$name
frontend_id <- "[concat(variables('lbId'), '/frontendIPConfigurations/', variables('lbFrontendName'))]"
backend_id <- "[concat(variables('lbId'), '/backendAddressPools/', variables('lbBackendName'))]"
varnames <- c(varnames, "lbFrontendName", "lbBackendName", "lbFrontendId", "lbBackendId")
structure(c(vars, frontend, backend, frontend_id, backend_id), names=varnames)
}
else structure(vars, names=varnames)
}
add_template_variables.NULL <- function(config, ...)
{
NULL
}
add_template_resources <- function(config, ...)
{
UseMethod("add_template_resources")
}
build_resource_fields <- function(config)
{
UseMethod("build_resource_fields")
}
build_resource_fields.list <- function(config, ...)
{
unclass(config)
}
# end of AzureVM/R/build_json.R
# from AzureRMR
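# retrieve all pages of a paged REST result, following the 'nextLink' field until no more pages remain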
get_paged_list <- function(lst, token, next_link_name="nextLink", value_name="value")
{
res <- lst[[value_name]]
while(!is_empty(lst[[next_link_name]]))
{
lst <- call_azure_url(token, lst[[next_link_name]])
res <- c(res, lst[[value_name]])
}
res
}
# end of AzureVM/R/get_paged_list.R
# virtual machine images ========================
#' @rdname vm_config
#' @export
centos_7.5 <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)), ...)
{
vm_config(image_config("OpenLogic", "CentOS", "7.5"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
#' @rdname vm_config
#' @export
centos_7.6 <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)), ...)
{
vm_config(image_config("OpenLogic", "CentOS", "7.6"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
#' @rdname vm_config
#' @export
centos_8.1 <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)), ...)
{
vm_config(image_config("OpenLogic", "CentOS", "8_1"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
# virtual machine scaleset images ===============
#' @rdname vmss_config
#' @export
centos_7.5_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)),
load_balancer=lb_config(rules=list(lb_rule_ssh),
probes=list(lb_probe_ssh)),
...)
{
vmss_config(image_config("OpenLogic", "CentOS", "7.5"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
#' @rdname vmss_config
#' @export
centos_7.6_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)),
load_balancer=lb_config(rules=list(lb_rule_ssh),
probes=list(lb_probe_ssh)),
...)
{
vmss_config(image_config("OpenLogic", "CentOS", "7.6"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
#' @rdname vmss_config
#' @export
centos_8.1_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)),
load_balancer=lb_config(rules=list(lb_rule_ssh),
probes=list(lb_probe_ssh)),
...)
{
vmss_config(image_config("OpenLogic", "CentOS", "8_1"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
# end of AzureVM/R/img_centos.R
# virtual machine images ========================
#' @rdname vm_config
#' @export
debian_8_backports <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)), ...)
{
vm_config(image_config("Credativ", "Debian", "8-backports"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
#' @rdname vm_config
#' @export
debian_9_backports <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)), ...)
{
vm_config(image_config("Credativ", "Debian", "9-backports"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
#' @rdname vm_config
#' @export
debian_10_backports <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)), ...)
{
vm_config(image_config("Debian", "Debian-10", "10-backports"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
#' @rdname vm_config
#' @export
debian_10_backports_gen2 <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)), ...)
{
vm_config(image_config("Debian", "Debian-10", "10-backports-gen2"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
# virtual machine scaleset images ===============
#' @rdname vmss_config
#' @export
debian_8_backports_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)),
load_balancer=lb_config(rules=list(lb_rule_ssh),
probes=list(lb_probe_ssh)),
...)
{
vmss_config(image_config("Credativ", "Debian", "8-backports"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
#' @rdname vmss_config
#' @export
debian_9_backports_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)),
load_balancer=lb_config(rules=list(lb_rule_ssh),
probes=list(lb_probe_ssh)),
...)
{
vmss_config(image_config("Credativ", "Debian", "9-backports"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
#' @rdname vmss_config
#' @export
debian_10_backports_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)),
load_balancer=lb_config(rules=list(lb_rule_ssh),
probes=list(lb_probe_ssh)),
...)
{
vmss_config(image_config("Debian", "Debian-10", "10-backports"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
#' @rdname vmss_config
#' @export
debian_10_backports_gen2_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)),
load_balancer=lb_config(rules=list(lb_rule_ssh),
probes=list(lb_probe_ssh)),
...)
{
vmss_config(image_config("Debian", "Debian-10", "10-backports-gen2"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
# end of AzureVM/R/img_debian.R
# virtual machine images ========================
#' @rdname vm_config
#' @export
ubuntu_dsvm <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh, nsg_rule_allow_jupyter, nsg_rule_allow_rstudio)),
...)
{
vm_config(image_config("microsoft-dsvm", "ubuntu-1804", "1804"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
#' @rdname vm_config
#' @export
ubuntu_dsvm_gen2 <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh, nsg_rule_allow_jupyter, nsg_rule_allow_rstudio)),
...)
{
vm_config(image_config("microsoft-dsvm", "ubuntu-1804", "1804-gen2"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
#' @rdname vm_config
#' @export
windows_dsvm <- function(keylogin=FALSE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_rdp)), ...)
{
vm_config(image_config("microsoft-dsvm", "dsvm-win-2019", "server-2019"),
keylogin=FALSE, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
# virtual machine scaleset images ===============
#' @rdname vmss_config
#' @export
ubuntu_dsvm_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh, nsg_rule_allow_jupyter, nsg_rule_allow_rstudio)),
load_balancer=lb_config(rules=list(lb_rule_ssh, lb_rule_jupyter, lb_rule_rstudio),
probes=list(lb_probe_ssh, lb_probe_jupyter, lb_probe_rstudio)),
...)
{
vmss_config(image_config("microsoft-dsvm", "ubuntu-1804", "1804"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
#' @rdname vmss_config
#' @export
ubuntu_dsvm_gen2_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh, nsg_rule_allow_jupyter, nsg_rule_allow_rstudio)),
load_balancer=lb_config(rules=list(lb_rule_ssh, lb_rule_jupyter, lb_rule_rstudio),
probes=list(lb_probe_ssh, lb_probe_jupyter, lb_probe_rstudio)),
...)
{
vmss_config(image_config("microsoft-dsvm", "ubuntu-1804", "1804-gen2"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
#' @rdname vmss_config
#' @export
windows_dsvm_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_rdp)),
load_balancer=lb_config(rules=list(lb_rule_rdp), probes=list(lb_probe_rdp)),
options=scaleset_options(keylogin=FALSE),
...)
{
options$keylogin <- FALSE
vmss_config(image_config("microsoft-dsvm", "dsvm-win-2019", "server-2019"),
options=options, datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
# end of AzureVM/R/img_dsvm.R
# virtual machine images ========================
#' @rdname vm_config
#' @export
rhel_7.6 <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)), ...)
{
vm_config(image_config("RedHat", "RHEL", "7-RAW"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
#' @rdname vm_config
#' @export
rhel_8 <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)), ...)
{
vm_config(image_config("RedHat", "RHEL", "8"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
#' @rdname vm_config
#' @export
rhel_8.1 <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)), ...)
{
vm_config(image_config("RedHat", "RHEL", "8.1"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
#' @rdname vm_config
#' @export
rhel_8.1_gen2 <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)), ...)
{
vm_config(image_config("RedHat", "RHEL", "81gen2"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
#' @rdname vm_config
#' @export
rhel_8.2 <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)), ...)
{
vm_config(image_config("RedHat", "RHEL", "8.2"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
#' @rdname vm_config
#' @export
rhel_8.2_gen2 <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)), ...)
{
vm_config(image_config("RedHat", "RHEL", "82gen2"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
# virtual machine scaleset images ===============
#' @rdname vmss_config
#' @export
rhel_7.6_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)),
load_balancer=lb_config(rules=list(lb_rule_ssh),
probes=list(lb_probe_ssh)),
...)
{
vmss_config(image_config("RedHat", "RHEL", "7-RAW"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
#' @rdname vmss_config
#' @export
rhel_8_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)),
load_balancer=lb_config(rules=list(lb_rule_ssh),
probes=list(lb_probe_ssh)),
...)
{
vmss_config(image_config("RedHat", "RHEL", "8"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
#' @rdname vmss_config
#' @export
rhel_8.1_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)),
load_balancer=lb_config(rules=list(lb_rule_ssh),
probes=list(lb_probe_ssh)),
...)
{
vmss_config(image_config("RedHat", "RHEL", "8.1"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
#' @rdname vmss_config
#' @export
rhel_8.1_gen2_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)),
load_balancer=lb_config(rules=list(lb_rule_ssh),
probes=list(lb_probe_ssh)),
...)
{
vmss_config(image_config("RedHat", "RHEL", "81gen2"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
#' @rdname vmss_config
#' @export
rhel_8.2_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)),
load_balancer=lb_config(rules=list(lb_rule_ssh),
probes=list(lb_probe_ssh)),
...)
{
vmss_config(image_config("RedHat", "RHEL", "8.2"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
#' @rdname vmss_config
#' @export
rhel_8.2_gen2_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)),
load_balancer=lb_config(rules=list(lb_rule_ssh),
probes=list(lb_probe_ssh)),
...)
{
vmss_config(image_config("RedHat", "RHEL", "82gen2"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
| /scratch/gouwar.j/cran-all/cranData/AzureVM/R/img_rhel.R |
# virtual machine images ========================
#' @rdname vm_config
#' @export
ubuntu_16.04 <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)), ...)
{
vm_config(image_config("Canonical", "UbuntuServer", "16.04-LTS"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
#' @rdname vm_config
#' @export
ubuntu_18.04 <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)), ...)
{
vm_config(image_config("Canonical", "UbuntuServer", "18.04-LTS"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
#' @rdname vm_config
#' @export
ubuntu_20.04 <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)), ...)
{
vm_config(image_config("Canonical", "0001-com-ubuntu-server-focal", "20_04-LTS"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
#' @rdname vm_config
#' @export
ubuntu_20.04_gen2 <- function(keylogin=TRUE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)), ...)
{
vm_config(image_config("Canonical", "0001-com-ubuntu-server-focal", "20_04-LTS-gen2"),
keylogin=keylogin, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
# virtual machine scaleset images ===============
#' @rdname vmss_config
#' @export
ubuntu_16.04_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)),
load_balancer=lb_config(rules=list(lb_rule_ssh),
probes=list(lb_probe_ssh)),
...)
{
vmss_config(image_config("Canonical", "UbuntuServer", "16.04-LTS"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
#' @rdname vmss_config
#' @export
ubuntu_18.04_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)),
load_balancer=lb_config(rules=list(lb_rule_ssh),
probes=list(lb_probe_ssh)),
...)
{
vmss_config(image_config("Canonical", "UbuntuServer", "18.04-LTS"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
#' @rdname vmss_config
#' @export
ubuntu_20.04_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)),
load_balancer=lb_config(rules=list(lb_rule_ssh),
probes=list(lb_probe_ssh)),
...)
{
vmss_config(image_config("Canonical", "0001-com-ubuntu-server-focal", "20_04-LTS"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
#' @rdname vmss_config
#' @export
ubuntu_20.04_gen2_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_ssh)),
load_balancer=lb_config(rules=list(lb_rule_ssh),
probes=list(lb_probe_ssh)),
...)
{
vmss_config(image_config("Canonical", "0001-com-ubuntu-server-focal", "20_04-LTS-gen2"),
datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
# end of AzureVM/R/img_ubuntu.R
# virtual machine images ========================
#' @rdname vm_config
#' @export
windows_2016 <- function(keylogin=FALSE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_rdp)), ...)
{
vm_config(image_config("MicrosoftWindowsServer", "WindowsServer", "2016-Datacenter"),
keylogin=FALSE, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
#' @rdname vm_config
#' @export
windows_2019 <- function(keylogin=FALSE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_rdp)), ...)
{
vm_config(image_config("MicrosoftWindowsServer", "WindowsServer", "2019-Datacenter"),
keylogin=FALSE, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
#' @rdname vm_config
#' @export
windows_2019_gen2 <- function(keylogin=FALSE, managed_identity=TRUE, datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_rdp)), ...)
{
vm_config(image_config("MicrosoftWindowsServer", "WindowsServer", "2019-Datacenter-gensecond"),
keylogin=FALSE, managed_identity=managed_identity, datadisks=datadisks, nsg=nsg, ...)
}
# virtual machine scaleset images ===============
#' @rdname vmss_config
#' @export
windows_2016_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_rdp)),
load_balancer=lb_config(rules=list(lb_rule_rdp),
probes=list(lb_probe_rdp)),
options=scaleset_options(keylogin=FALSE),
...)
{
options$keylogin <- FALSE
vmss_config(image_config("MicrosoftWindowsServer", "WindowsServer", "2016-Datacenter"),
options=options, datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
#' @rdname vmss_config
#' @export
windows_2019_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_rdp)),
load_balancer=lb_config(rules=list(lb_rule_rdp),
probes=list(lb_probe_rdp)),
options=scaleset_options(keylogin=FALSE),
...)
{
options$keylogin <- FALSE
vmss_config(image_config("MicrosoftWindowsServer", "WindowsServer", "2019-Datacenter"),
options=options, datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
#' @rdname vmss_config
#' @export
windows_2019_gen2_ss <- function(datadisks=numeric(0),
nsg=nsg_config(list(nsg_rule_allow_rdp)),
load_balancer=lb_config(rules=list(lb_rule_rdp),
probes=list(lb_probe_rdp)),
options=scaleset_options(keylogin=FALSE),
...)
{
options$keylogin <- FALSE
vmss_config(image_config("MicrosoftWindowsServer", "WindowsServer", "2019-Datacenter-gensecond"),
options=options, datadisks=datadisks, nsg=nsg, load_balancer=load_balancer, ...)
}
# end of AzureVM/R/img_windows.R
#' Public IP address configuration
#'
#' @param type The SKU of the IP address resource: "basic" or "standard". If NULL (the default), this will be determined based on the VM's configuration.
#' @param dynamic Whether the IP address should be dynamically or statically allocated. Note that the standard SKU only supports static allocation. If NULL (the default), this will be determined based on the VM's configuration.
#' @param ipv6 Whether to create an IPv6 address. The default is IPv4.
#' @param domain_name The domain name label to associate with the address.
#' @param ... Other named arguments that will be treated as resource properties.
#'
#' @seealso
#' [create_vm], [vm_config], [vmss_config]
#' @examples
#' ip_config()
#' ip_config(type="basic", dynamic=TRUE)
#'
#' # if you don't want a domain name associated with the IP address
#' ip_config(domain_name=NULL)
#' @export
ip_config <- function(type=NULL, dynamic=NULL, ipv6=FALSE, domain_name="[parameters('vmName')]", ...)
{
# structure(list(properties=props, sku=list(name=type)), class="ip_config")
props <- list(
type=type,
dynamic=dynamic,
ipv6=ipv6,
domain_name=domain_name,
other=list(...)
)
structure(props, class="ip_config")
}
build_resource_fields.ip_config <- function(config, ...)
{
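# merge the user-supplied IP settings into the default public IP definition (ip_default, loaded from inst/tpl)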
alloc <- if(config$dynamic) "dynamic" else "static"
version <- if(config$ipv6) "IPv6" else "IPv4"
props <- c(
list(
publicIPAllocationMethod=alloc,
publicIPAddressVersion=version
),
config$other)
if(!is.null(config$domain_name))
props$dnsSettings$domainNameLabel <- config$domain_name
sku <- list(name=config$type)
utils::modifyList(ip_default, list(properties=props, sku=sku))
}
add_template_variables.ip_config <- function(config, ...)
{
name <- "[concat(parameters('vmName'), '-ip')]"
id <- "[resourceId('Microsoft.Network/publicIPAddresses', variables('ipName'))]"
ref <- "[concat('Microsoft.Network/publicIPAddresses/', variables('ipName'))]"
list(ipName=name, ipId=id, ipRef=ref)
}
# end of AzureVM/R/ip_config.R
#' Is an object an Azure VM
#'
#' @param object an R object.
#'
#' @return
#' `is_vm` and `is_vm_template` return TRUE for an object representing a virtual machine deployment (which will include other resources besides the VM itself).
#'
#' `is_vm_resource` returns TRUE for an object representing the specific VM resource.
#'
#' `is_vm_scaleset` and `is_vm_scaleset_template` return TRUE for an object representing a VM scaleset deployment.
#'
#' `is_vm_scaleset_resource` returns TRUE for an object representing the specific VM scaleset resource.
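#'
#' As an illustrative sketch (the subscription and resource names are placeholders):
#' ```
#' sub <- AzureRMR::get_azure_login()$get_subscription("sub_id")
#'
#' vm <- sub$get_vm("myvm")
#' is_vm(vm)                     # TRUE
#' is_vm_resource(vm)            # FALSE: this is the deployment template object
#'
#' vmss <- sub$get_vm_scaleset("myss")
#' is_vm_scaleset(vmss)          # TRUE
#' is_vm_resource(vmss$list_instances()[[1]])  # TRUE: scaleset instances are VM resource objects
#' ```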
#'
#' @seealso
#' [create_vm], [create_vm_scaleset], [az_vm_template], [az_vm_resource], [az_vmss_template], [az_vmss_resource]
#' @export
is_vm <- function(object)
{
R6::is.R6(object) && inherits(object, "az_vm_template")
}
#' @rdname is_vm
#' @export
is_vm_template <- is_vm
#' @rdname is_vm
#' @export
is_vm_resource <- function(object)
{
R6::is.R6(object) && inherits(object, "az_vm_resource")
}
#' @rdname is_vm
#' @export
is_vm_scaleset <- function(object)
{
R6::is.R6(object) && inherits(object, "az_vmss_template")
}
#' @rdname is_vm
#' @export
is_vm_scaleset_template <- is_vm_scaleset
#' @rdname is_vm
#' @export
is_vm_scaleset_resource <- function(object)
{
R6::is.R6(object) && inherits(object, "az_vmss_resource")
}
# end of AzureVM/R/is_vm.R
#' Load balancer configuration
#'
#' @param type The SKU of the load balancer resource: "basic" or "standard". If NULL (the default), this will be determined based on the VM scaleset's configuration. Note that the load balancer SKU must be the same as that of its public IP address.
#' @param rules A list of load balancer rules, each obtained via a call to `lb_rule`.
#' @param probes A list of health checking probes, each obtained via a call to `lb_probe`. There must be a probe corresponding to each rule.
#' @param ... Other named arguments that will be treated as resource properties.
#' @param port For `lb_probe`, the port to probe.
#' @param interval For `lb_probe`, the time interval between probes in seconds.
#' @param fail_on For `lb_probe`, the probe health check will fail after this many non-responses.
#' @param protocol For `lb_probe` and `lb_rule`, the protocol. For a load balancing rule this is typically "Tcp" or "Udp"; for a probe, "Tcp", "Http" or "Https".
#' @param name For `lb_rule`, a name for the load balancing rule.
#' @param frontend_port,backend_port For `lb_rule`, the ports for this rule.
#' @param timeout The idle timeout interval for the rule, in minutes. The default is 5 minutes.
#' @param floating_ip Whether to use floating IP addresses (direct server return). Only needed for specific scenarios, and when the frontend and backend ports don't match.
#' @param probe_name The name of the corresponding health check probe.
#'
#' @seealso
#' [create_vm_scaleset], [vmss_config], [lb_rules] for some predefined load balancing rules and probes
#' @examples
#' lb_config()
#' lb_config(type="basic")
#' lb_config(
#' rules=list(lb_rule_ssh, lb_rule_rdp),
#' probes=list(lb_probe_ssh, lb_probe_rdp)
#' )
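#' # a custom rule and probe pair for a hypothetical app listening on port 8080
#' lb_config(
#' rules=list(lb_rule("lb-myapp", 8080, probe_name="probe-myapp")),
#' probes=list(lb_probe("probe-myapp", 8080))
#' )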
#' @export
lb_config <- function(type=NULL, rules=list(), probes=list(), ...)
{
rule_probe_names <- sapply(rules, function(x) x$properties$probe$id)
probe_names <- sapply(probes, `[[`, "name")
# basic checking: every rule must reference one of the supplied probes
for(r in rule_probe_names)
{
found <- FALSE
for(p in probe_names)
{
found <- grepl(p, r, fixed=TRUE)
if(found) break
}
if(!found)
stop("Rule with no matching probe: ", r, call.=FALSE)
}
props <- list(
type=type,
rules=rules,
probes=probes,
other=list(...)
)
structure(props, class="lb_config")
}
build_resource_fields.lb_config <- function(config, ...)
{
props <- c(
list(
loadBalancingRules=lapply(config$rules, unclass),
probes=lapply(config$probes, unclass)
),
config$other
)
sku <- list(name=config$type)
utils::modifyList(lb_default, list(properties=props, sku=sku))
}
add_template_variables.lb_config <- function(config, ...)
{
name <- "[concat(parameters('vmName'), '-lb')]"
id <- "[resourceId('Microsoft.Network/loadBalancers', variables('lbName'))]"
ref <- "[concat('Microsoft.Network/loadBalancers/', variables('lbName'))]"
frontend <- "frontend"
backend <- "backend"
frontend_id <- "[concat(variables('lbId'), '/frontendIPConfigurations/', variables('lbFrontendName'))]"
backend_id <- "[concat(variables('lbId'), '/backendAddressPools/', variables('lbBackendName'))]"
list(
lbName=name,
lbId=id,
lbRef=ref,
lbFrontendName=frontend,
lbBackendName=backend,
lbFrontendId=frontend_id,
lbBackendId=backend_id
)
}
#' @rdname lb_config
#' @export
lb_probe <- function(name, port, interval=5, fail_on=2, protocol="Tcp")
{
props <- list(
port=port,
intervalInSeconds=interval,
numberOfProbes=fail_on,
protocol=protocol
)
structure(list(name=name, properties=props), class="lb_probe")
}
#' @rdname lb_config
#' @export
lb_rule <- function(name, frontend_port, backend_port=frontend_port, protocol="Tcp", timeout=5,
floating_ip=FALSE, probe_name)
{
frontend_id <- "[variables('lbFrontendId')]"
backend_id <- "[variables('lbBackendId')]"
probe_id <- sprintf("[concat(variables('lbId'), '/probes/%s')]", probe_name)
props <- list(
frontendIpConfiguration=list(id=frontend_id),
backendAddressPool=list(id=backend_id),
protocol=protocol,
frontendPort=frontend_port,
backendPort=backend_port,
enableFloatingIp=floating_ip,
idleTimeoutInMinutes=timeout,
probe=list(id=probe_id)
)
structure(list(name=name, properties=props), class="lb_rule")
}
#' Load balancing rules
#'
#' @format
#' Objects of class `lb_rule` and `lb_probe`.
#' @details
#' Some predefined load balancing objects, for commonly used ports. Each load balancing rule comes with its own health probe.
#' - HTTP: TCP port 80
#' - HTTPS: TCP port 443
#' - JupyterHub: TCP port 8000
#' - RDP: TCP port 3389
#' - RStudio Server: TCP port 8787
#' - SSH: TCP port 22
#' - SQL Server: TCP port 1433
#' - SQL Server browser service: TCP port 1434
#' @docType data
#' @seealso
#' [lb_config]
#' @rdname lb_rules
#' @aliases lb_rules
#' @export
lb_rule_ssh <- lb_rule("lb-ssh", 22, 22, probe_name="probe-ssh")
#' @rdname lb_rules
#' @export
lb_rule_http <- lb_rule("lb-http", 80, 80, probe_name="probe-http")
#' @rdname lb_rules
#' @export
lb_rule_https <- lb_rule("lb-https", 443, 443, probe_name="probe-https")
#' @rdname lb_rules
#' @export
lb_rule_rdp <- lb_rule("lb-rdp", 3389, 3389, probe_name="probe-rdp")
#' @rdname lb_rules
#' @export
lb_rule_jupyter <- lb_rule("lb-jupyter", 8000, 8000, probe_name="probe-jupyter")
#' @rdname lb_rules
#' @export
lb_rule_rstudio <- lb_rule("lb-rstudio", 8787, 8787, probe_name="probe-rstudio")
#' @rdname lb_rules
#' @export
lb_rule_mssql <- lb_rule("lb-mssql", 1433, 1433, probe_name="probe-mssql")
#' @rdname lb_rules
#' @export
lb_rule_mssql_browser <- lb_rule("lb-mssql-browser", 1434, 1434, probe_name="probe-mssql-browser")
#' @rdname lb_rules
#' @export
lb_probe_ssh <- lb_probe("probe-ssh", 22)
#' @rdname lb_rules
#' @export
lb_probe_http <- lb_probe("probe-http", 80)
#' @rdname lb_rules
#' @export
lb_probe_https <- lb_probe("probe-https", 443)
#' @rdname lb_rules
#' @export
lb_probe_rdp <- lb_probe("probe-rdp", 3389)
#' @rdname lb_rules
#' @export
lb_probe_jupyter <- lb_probe("probe-jupyter", 8000)
#' @rdname lb_rules
#' @export
lb_probe_rstudio <- lb_probe("probe-rstudio", 8787)
#' @rdname lb_rules
#' @export
lb_probe_mssql <- lb_probe("probe-mssql", 1433)
#' @rdname lb_rules
#' @export
lb_probe_mssql_browser <- lb_probe("probe-mssql-browser", 1434)
# end of file: AzureVM/R/lb_config.R
lapply(dir("inst/tpl", pattern="\\.json$"), function(f)
{
objname <- sub("\\.json$", "", f)
obj <- jsonlite::fromJSON(file.path("inst/tpl", f), simplifyVector=FALSE)
assign(objname, obj, parent.env(environment()))
})
# end of file: AzureVM/R/load_tpls.R
#' Network interface configuration
#'
#' @param nic_ip For `nic_config`, a list of IP configuration objects, each obtained via a call to `nic_ip_config`.
#' @param name For `nic_ip_config`, the name of the IP configuration.
#' @param private_alloc For `nic_ip_config`, the allocation method for a private IP address. Can be "dynamic" or "static".
#' @param subnet For `nic_ip_config`, the subnet to associate with this private IP address.
#' @param public_address For `nic_ip_config`, the public IP address. Defaults to the public IP address created or used as part of this VM deployment. Ignored if the deployment does not include a public address.
#' @param ... Other named arguments that will be treated as resource properties.
#'
#' @seealso
#' [create_vm], [vm_config]
#' @examples
#' nic_config()
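#' # a sketch with a static private IP address; the address below is only an illustration
#' # ('privateIPAddress' is passed through as an ipConfiguration resource property)
#' nic_config(list(
#'     nic_ip_config(private_alloc="static", privateIPAddress="10.0.0.10")
#' ))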
#' @export
nic_config <- function(nic_ip=list(nic_ip_config()), ...)
{
# unique-ify ip config names
if(length(nic_ip) > 1)
{
ip_names <- make.unique(sapply(nic_ip, `[[`, "name"))
ip_names[1] <- paste0(ip_names[1], "0")
for(i in seq_along(nic_ip))
nic_ip[[i]]$name <- ip_names[i]
}
props <- list(ipConfigurations=nic_ip, ...)
structure(list(properties=props), class="nic_config")
}
build_resource_fields.nic_config <- function(config)
{
config$properties$ipConfigurations <- lapply(config$properties$ipConfigurations, unclass)
utils::modifyList(nic_default, config)
}
add_template_variables.nic_config <- function(config, ...)
{
name <- "[concat(parameters('vmName'), '-nic')]"
id <- "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
ref <- "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
list(nicName=name, nicId=id, nicRef=ref)
}
#' @rdname nic_config
#' @export
nic_ip_config <- function(name="ipconfig", private_alloc="dynamic", subnet="[variables('subnetId')]",
public_address="[variables('ipId')]", ...)
{
props <- list(
privateIPAllocationMethod=private_alloc,
subnet=list(id=subnet),
publicIPAddress=list(id=public_address),
...
)
structure(list(name=name, properties=props), class="nic_ip_config")
}
# end of file: AzureVM/R/nic_config.R
#' Network security group configuration
#'
#' @param rules For `nsg_config`, a list of security rule objects, each obtained via a call to `nsg_rule`.
#' @param dest_port,dest_addr,dest_asgs For `nsg_rule`, the destination port, address range, and application security groups for a rule.
#' @param source_port,source_addr,source_asgs For `nsg_rule`, the source port, address range, and application security groups for a rule.
#' @param ... Other named arguments that will be treated as resource properties.
#' @param name For `nsg_rule`, a name for the rule.
#' @param access For `nsg_rule`, the action to take: "allow" or "deny".
#' @param direction For `nsg_rule`, the direction of traffic: "inbound" or "outbound".
#' @param protocol For `nsg_rule`, the network protocol: "Tcp", "Udp", or "*" to match all protocols.
#' @param priority For `nsg_rule`, the rule priority. If NULL, this will be set automatically by AzureVM.
#'
#' @seealso
#' [create_vm], [vm_config], [vmss_config], [nsg_rules] for some predefined security rules
#' @examples
#' nsg_config()
#' nsg_config(list(nsg_rule_allow_ssh)) # for Linux
#' nsg_config(list(nsg_rule_allow_rdp)) # for Windows
#' nsg_config(list(nsg_rule_allow_http, nsg_rule_allow_https))
#'
#' # a custom rule
#' nsg_config(list(
#' nsg_rule(
#' name="whitelist",
#' source_addr="114.198.100.0/24",
#' access="allow",
#' protocol="*"
#' )
#' ))
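#' # a hypothetical deny rule with an explicit priority: block outbound SMTP
#' nsg_config(list(
#'     nsg_rule(
#'         name="block-smtp",
#'         dest_port=25,
#'         access="deny",
#'         direction="outbound",
#'         priority=4000
#'     )
#' ))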
#' @export
nsg_config <- function(rules=list(), ...)
{
stopifnot(is.list(rules))
props <- list(securityRules=rules, ...)
structure(list(properties=props), class="nsg_config")
}
build_resource_fields.nsg_config <- function(config, ...)
{
for(i in seq_along(config$properties$securityRules))
{
# fixup nsg security rule priorities
if(is_empty(config$properties$securityRules[[i]]$properties$priority))
config$properties$securityRules[[i]]$properties$priority <- 1000 + 10 * i
config$properties$securityRules[[i]] <- unclass(config$properties$securityRules[[i]])
}
utils::modifyList(nsg_default, config)
}
add_template_variables.nsg_config <- function(config, ...)
{
name <- "[concat(parameters('vmName'), '-nsg')]"
id <- "[resourceId('Microsoft.Network/networkSecurityGroups', variables('nsgName'))]"
ref <- "[concat('Microsoft.Network/networkSecurityGroups/', variables('nsgName'))]"
list(nsgName=name, nsgId=id, nsgRef=ref)
}
#' @rdname nsg_config
#' @export
nsg_rule <- function(name, dest_port="*", dest_addr="*", dest_asgs=NULL,
source_port="*", source_addr="*", source_asgs=NULL,
access="allow", direction="inbound",
protocol="Tcp", priority=NULL)
{
if(is_empty(dest_asgs))
dest_asgs <- logical(0)
if(is_empty(source_asgs))
source_asgs <- logical(0)
properties <- list(
protocol=protocol,
access=access,
direction=direction,
sourceApplicationSecurityGroups=source_asgs,
destinationApplicationSecurityGroups=dest_asgs,
sourceAddressPrefix=source_addr,
sourcePortRange=as.character(source_port),
destinationAddressPrefix=dest_addr,
destinationPortRange=as.character(dest_port)
)
if(!is_empty(priority))
properties$priority <- priority
structure(list(name=name, properties=properties), class="nsg_rule")
}
#' Network security rules
#'
#' @format
#' Objects of class `nsg_rule`.
#' @details
#' Some predefined network security rule objects, to unblock commonly used ports.
#' - HTTP: TCP port 80
#' - HTTPS: TCP port 443
#' - JupyterHub: TCP port 8000
#' - RDP: TCP port 3389
#' - RStudio Server: TCP port 8787
#' - SSH: TCP port 22
#' - SQL Server: TCP port 1433
#' - SQL Server browser service: TCP port 1434
#' @docType data
#' @seealso
#' [nsg_config]
#' @rdname nsg_rules
#' @aliases nsg_rules
#' @export
nsg_rule_allow_ssh <- nsg_rule("Allow-ssh", 22)
#' @rdname nsg_rules
#' @export
nsg_rule_allow_http <- nsg_rule("Allow-http", 80)
#' @rdname nsg_rules
#' @export
nsg_rule_allow_https <- nsg_rule("Allow-https", 443)
#' @rdname nsg_rules
#' @export
nsg_rule_allow_rdp <- nsg_rule("Allow-rdp", 3389)
#' @rdname nsg_rules
#' @export
nsg_rule_allow_jupyter <- nsg_rule("Allow-jupyter", 8000)
#' @rdname nsg_rules
#' @export
nsg_rule_allow_rstudio <- nsg_rule("Allow-rstudio", 8787)
#' @rdname nsg_rules
#' @export
nsg_rule_allow_mssql <- nsg_rule("Allow-mssql", 1433)
#' @rdname nsg_rules
#' @export
nsg_rule_allow_mssql_browser <- nsg_rule("Allow-mssql-browser", 1434)
# end of file: AzureVM/R/nsg_config.R
#' VM configuration functions
#'
#' @param image For `vm_config`, the VM image to deploy. This should be an object of class `image_config`, created by the function of the same name.
#' @param keylogin Whether to use an SSH public key to login (TRUE) or a password (FALSE). Note that Windows does not support SSH key logins.
#' @param managed_identity Whether to provide a managed system identity for the VM.
#' @param datadisks The data disks to attach to the VM. Specify this as either a vector of numeric disk sizes in GB, or a list of `datadisk_config` objects for more control over the specification.
#' @param os_disk_type The type of primary disk for the VM. Can be "Premium_LRS" (the default), "StandardSSD_LRS", or "Standard_LRS". Of these, "Standard_LRS" uses hard disks and the others use SSDs as the underlying hardware. Change this to "StandardSSD_LRS" or "Standard_LRS" if the VM size doesn't support premium storage.
#' @param nsg The network security group for the VM. Can be a call to `nsg_config` to create a new NSG; an AzureRMR resource object or resource ID to reuse an existing NSG; or NULL to not use an NSG (not recommended).
#' @param ip The public IP address for the VM. Can be a call to `ip_config` to create a new IP address; an AzureRMR resource object or resource ID to reuse an existing address resource; or NULL if the VM should not be accessible from outside its subnet.
#' @param vnet The virtual network for the VM. Can be a call to `vnet_config` to create a new virtual network, or an AzureRMR resource object or resource ID to reuse an existing virtual network. Note that by default, AzureVM will associate the NSG with the virtual network/subnet, not with the VM's network interface.
#' @param nic The network interface for the VM. Can be a call to `nic_config` to create a new interface, or an AzureRMR resource object or resource ID to reuse an existing interface.
#' @param other_resources An optional list of other resources to include in the deployment.
#' @param variables An optional named list of variables to add to the template.
#' @param ... For the specific VM configurations, other customisation arguments to be passed to `vm_config`. For `vm_config`, named arguments that will be folded into the VM resource definition in the template. In particular use `properties=list(*)` to set additional properties for the VM, beyond those set by the various configuration functions.
#'
#' @details
#' These functions are for specifying the details of a new virtual machine deployment: the VM image and related options, along with the Azure resources that the VM may need. These include the datadisks, network security group, public IP address (if the VM is to be accessible from outside its subnet), virtual network, and network interface. `vm_config` is the base configuration function, and the others call it to create VMs with specific operating systems and other image details.
#' - `ubuntu_dsvm`: Data Science Virtual Machine, based on Ubuntu 18.04
#' - `windows_dsvm`: Data Science Virtual Machine, based on Windows Server 2019
#' - `ubuntu_16.04`, `ubuntu_18.04`, `ubuntu_20.04`, `ubuntu_20.04_gen2`: Ubuntu LTS
#' - `windows_2016`, `windows_2019`: Windows Server Datacenter edition
#' - `rhel_7.6`, `rhel_8`, `rhel_8.1`, `rhel_8.1_gen2`, `rhel_8.2`, `rhel_8.2_gen2`: Red Hat Enterprise Linux
#' - `centos_7.5`, `centos_7.6`, `centos_8.1`: CentOS
#' - `debian_8_backports`, `debian_9_backports`, `debian_10_backports`, `debian_10_backports_gen2`: Debian with backports
#'
#' The VM configurations with `gen2` in the name are [generation 2 VMs](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/generation-2), which feature several technical improvements over the earlier generation 1. Consider using these for greater efficiency, however note that gen2 VMs are only available for select images and do not support all possible VM sizes.
#'
#' Each resource can be specified in a number of ways:
#' - To _create_ a new resource as part of the deployment, call the corresponding `*_config` function.
#' - To use an _existing_ resource, supply either an `AzureRMR::az_resource` object (recommended) or a string containing the resource ID.
#' - If the resource is not needed, specify it as NULL.
#' - For the `other_resources` argument, supply a list of resources, each of which should be a list of resource fields (name, type, properties, sku, etc).
#'
#' A VM configuration defines the following template variables by default, depending on its resources. If a particular resource is created, the corresponding `*Name`, `*Id` and `*Ref` variables will be available. If a resource is referred to but not created, the `*Name*` and `*Id` variables will be available. Other variables can be defined via the `variables` argument.
#'
#' \tabular{lll}{
#' **Variable name** \tab **Contents** \tab **Description** \cr
#' `location` \tab `[resourceGroup().location]` \tab Region to deploy resources \cr
#' `vmId` \tab `[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]` \tab VM resource ID \cr
#' `vmRef` \tab `[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]` \tab VM template reference \cr
#' `nsgName` \tab `[concat(parameters('vmName'), '-nsg')]` \tab Network security group resource name \cr
#' `nsgId` \tab `[resourceId('Microsoft.Network/networkSecurityGroups', variables('nsgName'))]` \tab NSG resource ID \cr
#' `nsgRef` \tab `[concat('Microsoft.Network/networkSecurityGroups/', variables('nsgName'))]` \tab NSG template reference \cr
#' `ipName` \tab `[concat(parameters('vmName'), '-ip')]` \tab Public IP address resource name \cr
#' `ipId` \tab `[resourceId('Microsoft.Network/publicIPAddresses', variables('ipName'))]` \tab IP resource ID \cr
#' `ipRef` \tab `[concat('Microsoft.Network/publicIPAddresses/', variables('ipName'))]` \tab IP template reference \cr
#' `vnetName` \tab `[concat(parameters('vmName'), '-vnet')]` \tab Virtual network resource name \cr
#' `vnetId` \tab `[resourceId('Microsoft.Network/virtualNetworks', variables('vnetName'))]` \tab Vnet resource ID \cr
#' `vnetRef` \tab `[concat('Microsoft.Network/virtualNetworks/', variables('vnetName'))]` \tab Vnet template reference \cr
#' `subnet` \tab `subnet` \tab Subnet name. Only defined if a Vnet was created or supplied as an `az_resource` object. \cr
#' `subnetId` \tab `[concat(variables('vnetId'), '/subnets/', variables('subnet'))]` \tab Subnet resource ID. Only defined if a Vnet was created or supplied as an `az_resource` object. \cr
#' `nicName` \tab `[concat(parameters('vmName'), '-nic')]` \tab Network interface resource name \cr
#' `nicId` \tab `[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]` \tab NIC resource ID \cr
#' `nicRef` \tab `[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]` \tab NIC template reference
#' }
#'
#' Thus, for example, if you are creating a VM named "myvm" along with all its associated resources, the NSG is named "myvm-nsg", the public IP address is "myvm-ip", the virtual network is "myvm-vnet", and the network interface is "myvm-nic".
#'
#' @return
#' An object of S3 class `vm_config`, that can be used by the `create_vm` method.
#'
#' @seealso
#' [image_config], [user_config], [datadisk_config] for options relating to the VM resource itself
#'
#' [nsg_config], [ip_config], [vnet_config], [nic_config] for other resource configs
#'
#' [build_template] for template builder methods
#'
#' [vmss_config] for configuring a virtual machine scaleset
#'
#' [create_vm]
#'
#' @examples
#'
#' # basic Linux (Ubuntu) and Windows configs
#' ubuntu_18.04()
#' windows_2019()
#'
#' # Windows DSVM with 500GB data disk, no public IP address
#' windows_dsvm(datadisks=500, ip=NULL)
#'
#' # RHEL VM exposing ports 80 (HTTP) and 443 (HTTPS)
#' rhel_8(nsg=nsg_config(list(nsg_rule_allow_http, nsg_rule_allow_https)))
#'
#' # exposing no ports externally, spot (low) priority
#' rhel_8(nsg=nsg_config(list()), properties=list(priority="spot"))
#'
#' # deploying an extra resource: storage account
#' ubuntu_18.04(
#' variables=list(storName="[concat(parameters('vmName'), 'stor')]"),
#' other_resources=list(
#' list(
#' type="Microsoft.Storage/storageAccounts",
#' name="[variables('storName')]",
#' apiVersion="2018-07-01",
#' location="[variables('location')]",
#' properties=list(supportsHttpsTrafficOnly=TRUE),
#' sku=list(name="Standard_LRS"),
#' kind="Storage"
#' )
#' )
#' )
#'
#' ## custom VM configuration: Windows 10 Pro 1903 with data disks
#' ## this assumes you have a valid Win10 desktop license
#' user <- user_config("myname", password="Use-strong-passwords!")
#' image <- image_config(
#' publisher="MicrosoftWindowsDesktop",
#' offer="Windows-10",
#' sku="19h1-pro"
#' )
#' datadisks <- list(
#' datadisk_config(250, type="Premium_LRS"),
#' datadisk_config(1000, type="Standard_LRS")
#' )
#' nsg <- nsg_config(
#' list(nsg_rule_allow_rdp)
#' )
#' vm_config(
#' image=image,
#' keylogin=FALSE,
#' datadisks=datadisks,
#' nsg=nsg,
#' properties=list(licenseType="Windows_Client")
#' )
#'
#'
#' \dontrun{
#'
#' # reusing existing resources: placing multiple VMs in one vnet/subnet
#' rg <- AzureRMR::get_azure_login()$
#' get_subscription("sub_id")$
#' get_resource_group("rgname")
#'
#' vnet <- rg$get_resource(type="Microsoft.Network/virtualNetworks", name="myvnet")
#'
#' # by default, the NSG is associated with the subnet, so we don't need a new NSG either
#' vmconfig1 <- ubuntu_18.04(vnet=vnet, nsg=NULL)
#' vmconfig2 <- debian_9_backports(vnet=vnet, nsg=NULL)
#' vmconfig3 <- windows_2019(vnet=vnet, nsg=NULL)
#'
#' }
#' @export
vm_config <- function(image, keylogin, managed_identity=TRUE,
os_disk_type=c("Premium_LRS", "StandardSSD_LRS", "Standard_LRS"),
datadisks=numeric(0),
nsg=nsg_config(),
ip=ip_config(),
vnet=vnet_config(),
nic=nic_config(),
other_resources=list(),
variables=list(),
...)
{
if(is.numeric(datadisks))
datadisks <- lapply(datadisks, datadisk_config)
stopifnot(inherits(image, "image_config"))
stopifnot(is.list(datadisks) && all(sapply(datadisks, inherits, "datadisk_config")))
ip <- vm_fixup_ip(ip)
obj <- list(
image=image,
keylogin=keylogin,
managed_identity=managed_identity,
os_disk_type=match.arg(os_disk_type),
datadisks=datadisks,
nsg=nsg,
ip=ip,
vnet=vnet,
nic=nic,
other=other_resources,
variables=variables,
vm_fields=list(...)
)
structure(obj, class="vm_config")
}
vm_fixup_ip <- function(ip)
{
# don't try to fix IP if not created here
if(is.null(ip) || !inherits(ip, "ip_config"))
return(ip)
# default for a regular VM: sku=basic, allocation=dynamic
if(is.null(ip$type))
ip$type <- "basic"
if(is.null(ip$dynamic))
ip$dynamic <- tolower(ip$type) == "basic"
# check consistency
if(tolower(ip$type) == "standard" && ip$dynamic)
stop("Standard IP address type does not support dynamic allocation", call.=FALSE)
ip
}
# end of file: AzureVM/R/vm_config.R
#' Resource configuration functions for a virtual machine deployment
#'
#' @param username For `user_config`, the name for the admin user account.
#' @param sshkey For `user_config`, the SSH public key. This can be supplied in a number of ways: as a string with the key itself; the name of the public key file; or an `AzureRMR::az_resource` object pointing to an SSH public key resource (of type "Microsoft.Compute/sshPublicKeys"). See the examples below.
#' @param password For `user_config`, the admin password. Supply either `sshkey` or `password`, but not both; also, note that Windows does not support SSH logins.
#' @param size For `datadisk_config`, the size of the data disk in GB. Set this to NULL for a disk that will be created from an image.
#' @param name For `datadisk_config`, the disk name. Duplicate names will automatically be disambiguated prior to VM deployment.
#' @param create For `datadisk_config`, the creation method. Can be "empty" (the default) to create a blank disk, or "fromImage" to use an image.
#' @param type For `datadisk_config`, the disk type (SKU). Can be "Standard_LRS", "StandardSSD_LRS" (the default), "Premium_LRS" or "UltraSSD_LRS". Of these, "Standard_LRS" uses hard disks and the others use SSDs as the underlying hardware.
#' @param write_accelerator For `datadisk_config`, whether the disk should have write acceleration enabled.
#' @param publisher,offer,sku,version For `image_config`, the details for a marketplace image.
#' @param id For `image_config`, the resource ID for a disk to use as a custom image.
#'
#' @examples
#' \dontrun{
#'
#' ## user_config: SSH public key resource in Azure
#' # create the resource
#' keyres <- rg$create_resource(type="Microsoft.Compute/sshPublicKeys", name="mysshkey")
#'
#' # generate the public and private keys
#' keys <- keyres$do_operation("generateKeyPair", http_verb="POST")
#' keyres$sync_fields()
#'
#' # save the private key (IMPORTANT)
#' writeBin(keys$privateKey, "mysshkey.pem")
#'
#' # create a new VM using the public key resource for authentication
#' # you can then login to the VM with ssh -i mysshkey.pem <username@vmaddress>
#' rg$create_vm("myvm", user_config("username", sshkey=keyres), config="ubuntu_20.04")
#'
#'
#' ## user_config: SSH public key as a file
#' rg$create_vm("myvm", user_config("username", sshkey="mysshkey.pub"), config="ubuntu_20.04")
#'
#'
#' ## user_config: SSH public key as a string (read from a file)
#' pubkey <- readLines("mysshkey.pub")
#' rg$create_vm("myvm", user_config("username", sshkey=pubkey), config="ubuntu_20.04")
#'
#' }
#'
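#' ## datadisk_config: an empty 500GB standard SSD disk, and a 1TB premium SSD disk
#' datadisk_config(500)
#' datadisk_config(1000, type="Premium_LRS")
#'
#' ## image_config: a marketplace image, and a custom image given by a resource ID
#' ## (the ID below is a made-up example)
#' image_config(publisher="Canonical", offer="UbuntuServer", sku="18.04-LTS")
#' image_config(
#'     id="/subscriptions/sub_id/resourceGroups/rgname/providers/Microsoft.Compute/images/myimage"
#' )
#'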
#' @rdname vm_resource_config
#' @export
user_config <- function(username, sshkey=NULL, password=NULL)
{
keyres <- is_resource(sshkey) && tolower(sshkey$type) == "microsoft.compute/sshpublickeys"
key <- is.character(sshkey) || keyres
pwd <- is.character(password)
if(!pwd && !key)
stop("Must supply either a login password or SSH key", call.=FALSE)
if(pwd && key)
stop("Supply either a login password or SSH key, but not both", call.=FALSE)
if(keyres)
{
sshkey <- gsub("\r\n", "", sshkey$properties$publicKey)
if(is_empty(sshkey))
stop("Supplied public key resource is uninitialized, run generateKeyPair first and save the returned keys",
call.=FALSE)
}
else if(key && file.exists(sshkey))
sshkey <- readLines(sshkey)
structure(list(user=username, key=sshkey, pwd=password), class="user_config")
}
#' @rdname vm_resource_config
#' @export
datadisk_config <- function(size, name="datadisk", create="empty",
type=c("StandardSSD_LRS", "Premium_LRS", "Standard_LRS", "UltraSSD_LRS"),
write_accelerator=FALSE)
{
type <- match.arg(type)
vm_caching <- if(type == "Premium_LRS") "ReadOnly" else "None"
vm_create <- if(create == "empty") "attach" else "fromImage"
vm_storage <- if(create == "empty") NULL else type
vm_spec <- list(
createOption=vm_create,
caching=vm_caching,
writeAcceleratorEnabled=write_accelerator,
storageAccountType=vm_storage,
diskSizeGB=NULL,
id=NULL,
name=name
)
res_spec <- if(!is.null(size))
list(
diskSizeGB=size,
sku=type,
creationData=list(createOption=create),
name=name
)
else NULL
structure(list(res_spec=res_spec, vm_spec=vm_spec), class="datadisk_config")
}
#' @rdname vm_resource_config
#' @export
image_config <- function(publisher=NULL, offer=NULL, sku=NULL, version="latest", id=NULL)
{
if(!is.null(publisher) && !is.null(offer) && !is.null(sku))
{
structure(list(publisher=publisher, offer=offer, sku=sku, version=version),
class=c("image_marketplace", "image_config"))
}
else if(!is.null(id))
{
structure(list(id=id),
class=c("image_custom", "image_config"))
}
else stop("Invalid image configuration", call.=FALSE)
}
# end of file: AzureVM/R/vm_resource_config.R
add_template_parameters.vm_config <- function(config, ...)
{
add_param <- function(...)
{
new_params <- lapply(list(...), function(obj) list(type=obj))
params <<- c(params, new_params)
}
params <- tpl_parameters_default
if(config$keylogin)
add_param(sshKeyData="string")
else add_param(adminPassword="securestring")
if(inherits(config$image, "image_marketplace"))
add_param(imagePublisher="string", imageOffer="string", imageSku="string", imageVersion="string")
else add_param(imageId="string")
if(length(config$datadisks) > 0)
add_param(dataDisks="array", dataDiskResources="array")
params
}
add_template_variables.vm_config <- function(config, ...)
{
vars <- list(
location="[resourceGroup().location]",
vmId="[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]",
vmRef="[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"
)
for(res in c("nsg", "ip", "vnet", "nic"))
vars <- c(vars, add_template_variables(config[[res]], res))
# add any extra variables provided by the user
utils::modifyList(vars, config$variables)
}
add_template_resources.vm_config <- function(config, ...)
{
vm <- vm_default
# fixup VM properties
n_disks <- length(config$datadisks)
n_disk_resources <- if(n_disks > 0)
sum(sapply(config$datadisks, function(x) !is.null(x$res_spec)))
else 0
if(n_disks > 0)
vm$properties$storageProfile$copy <- vm_datadisk
if(config$managed_identity)
vm$identity <- list(type="systemAssigned")
vm$properties$storageProfile$osDisk$managedDisk$storageAccountType <- config$os_disk_type
vm$properties$osProfile <- c(vm$properties$osProfile,
if(config$keylogin) vm_key_login else vm_pwd_login)
if(inherits(config$image, "image_custom"))
vm$properties$storageProfile$imageReference <- list(id="[parameters('imageId')]")
if(!is_empty(config$vm_fields))
vm <- utils::modifyList(vm, config$vm_fields)
resources <- config[c("nsg", "ip", "vnet", "nic")]
existing <- sapply(resources, existing_resource)
unused <- sapply(resources, is.null)
create <- !existing & !unused
# cannot use lapply(*, build_resource_fields) because of lapply wart
resources <- lapply(resources[create], function(x) build_resource_fields(x))
## fixup dependencies between resources
# vnet depends on nsg
# nic depends on ip, vnet (possibly nsg)
# vm depends on nic (but nic should always be created)
if(create["vnet"])
{
if(!create["nsg"])
resources$vnet$dependsOn <- NULL
if(unused["nsg"])
resources$vnet$properties$subnets[[1]]$properties$networkSecurityGroup <- NULL
}
if(create["nic"])
{
nic_created_depends <- create[c("ip", "vnet")]
resources$nic$dependsOn <- resources$nic$dependsOn[nic_created_depends]
if(unused["ip"])
resources$nic$properties$ipConfigurations[[1]]$properties$publicIPAddress <- NULL
}
else vm$dependsOn <- NULL
if(n_disk_resources > 0)
{
resources <- c(resources, list(disk_default))
if(n_disks > 0)
vm$dependsOn <- c(vm$dependsOn, "managedDiskResources")
}
resources <- c(resources, list(vm))
if(!is_empty(config$other))
resources <- c(resources, lapply(config$other, function(x) build_resource_fields(x)))
unname(resources)
}
# check if we are referring to an existing resource or creating a new one
existing_resource <- function(object)
{
# can be a resource ID string or AzureRMR::az_resource object
is.character(object) || is_resource(object)
}
# end of file: AzureVM/R/vm_template_builders.R
#' Virtual machine scaleset configuration functions
#'
#' @param image For `vmss_config`, the VM image to deploy. This should be an object of class `image_config`, created by the function of the same name.
#' @param options Scaleset options, as obtained via a call to `scaleset_options`.
#' @param datadisks The data disks to attach to the VM. Specify this as either a vector of numeric disk sizes in GB, or a list of `datadisk_config` objects for more control over the specification.
#' @param nsg The network security group for the scaleset. Can be a call to `nsg_config` to create a new NSG; an AzureRMR resource object or resource ID to reuse an existing NSG; or NULL to not use an NSG (not recommended).
#' @param vnet The virtual network for the scaleset. Can be a call to `vnet_config` to create a new virtual network, or an AzureRMR resource object or resource ID to reuse an existing virtual network. Note that by default, AzureVM will associate the NSG with the virtual network/subnet, not with the VM's network interface.
#' @param load_balancer The load balancer for the scaleset. Can be a call to `lb_config` to create a new load balancer; an AzureRMR resource object or resource ID to reuse an existing load balancer; or NULL if load balancing is not required.
#' @param load_balancer_address The public IP address for the load balancer. Can be a call to `ip_config` to create a new IP address, or an AzureRMR resource object or resource ID to reuse an existing address resource. Ignored if `load_balancer` is NULL.
#' @param autoscaler The autoscaler for the scaleset. Can be a call to `autoscaler_config` to create a new autoscaler; an AzureRMR resource object or resource ID to reuse an existing autoscaler; or NULL if autoscaling is not required.
#' @param other_resources An optional list of other resources to include in the deployment.
#' @param variables An optional named list of variables to add to the template.
#' @param ... For the specific VM configurations, other customisation arguments to be passed to `vm_config`. For `vmss_config`, named arguments that will be folded into the scaleset resource definition in the template.
#'
#' @details
#' These functions are for specifying the details of a new virtual machine scaleset deployment: the base VM image and related options, along with the Azure resources that the scaleset may need. These include the network security group, virtual network, load balancer and associated public IP address, and autoscaler.
#'
#' Each resource can be specified in a number of ways:
#' - To _create_ a new resource as part of the deployment, call the corresponding `*_config` function.
#' - To use an _existing_ resource, supply either an `AzureRMR::az_resource` object (recommended) or a string containing the resource ID.
#' - If the resource is not needed, specify it as NULL.
#' - For the `other_resources` argument, supply a list of resources, each of which should be a list of resource fields (name, type, properties, sku, etc).
#'
#' The `vmss_config` function is the base configuration function, and the others call it to create VM scalesets with specific operating systems and other image details.
#' - `ubuntu_dsvm_ss`: Data Science Virtual Machine, based on Ubuntu 18.04
#' - `windows_dsvm_ss`: Data Science Virtual Machine, based on Windows Server 2019
#' - `ubuntu_16.04_ss`, `ubuntu_18.04_ss`, `ubuntu_20.04_ss`, `ubuntu_20.04_gen2_ss`: Ubuntu LTS
#' - `windows_2016_ss`, `windows_2019_ss`: Windows Server Datacenter edition
#' - `rhel_7.6_ss`, `rhel_8_ss`, `rhel_8.1_ss`, `rhel_8.1_gen2_ss`, `rhel_8.2_ss`, `rhel_8.2_gen2_ss`: Red Hat Enterprise Linux
#' - `centos_7.5_ss`, `centos_7.6_ss`, `centos_8.1_ss`: CentOS
#' - `debian_8_backports_ss`, `debian_9_backports_ss`, `debian_10_backports_ss`, `debian_10_backports_gen2_ss`: Debian with backports
#'
#' The VM scaleset configurations with `gen2` in the name use [generation 2 VMs](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/generation-2), which feature several technical improvements over the earlier generation 1. Consider using these for greater efficiency, however note that gen2 VMs are only available for select images and do not support all possible VM sizes.
#'
#' A VM scaleset configuration defines the following template variables by default, depending on its resources. If a particular resource is created, the corresponding `*Name`, `*Id` and `*Ref` variables will be available. If a resource is referred to but not created, the `*Name*` and `*Id` variables will be available. Other variables can be defined via the `variables` argument.
#'
#' \tabular{lll}{
#' **Variable name** \tab **Contents** \tab **Description** \cr
#' `location` \tab `[resourceGroup().location]` \tab Region to deploy resources \cr
#' `vmId` \tab `[resourceId('Microsoft.Compute/virtualMachineScalesets', parameters('vmName'))]` \tab VM scaleset resource ID \cr
#' `vmRef` \tab `[concat('Microsoft.Compute/virtualMachineScalesets/', parameters('vmName'))]` \tab Scaleset template reference \cr
#' `nsgName` \tab `[concat(parameters('vmName'), '-nsg')]` \tab Network security group resource name \cr
#' `nsgId` \tab `[resourceId('Microsoft.Network/networkSecurityGroups', variables('nsgName'))]` \tab NSG resource ID \cr
#' `nsgRef` \tab `[concat('Microsoft.Network/networkSecurityGroups/', variables('nsgName'))]` \tab NSG template reference \cr
#' `vnetName` \tab `[concat(parameters('vmName'), '-vnet')]` \tab Virtual network resource name \cr
#' `vnetId` \tab `[resourceId('Microsoft.Network/virtualNetworks', variables('vnetName'))]` \tab Vnet resource ID \cr
#' `vnetRef` \tab `[concat('Microsoft.Network/virtualNetworks/', variables('vnetName'))]` \tab Vnet template reference \cr
#' `subnet` \tab `subnet` \tab Subnet name. Only defined if a Vnet was created or supplied as an `az_resource` object. \cr
#' `subnetId` \tab `[concat(variables('vnetId'), '/subnets/', variables('subnet'))]` \tab Subnet resource ID. Only defined if a Vnet was created or supplied as an `az_resource` object. \cr
#' `lbName` \tab `[concat(parameters('vmName'), '-lb')]` \tab Load balancer resource name \cr
#' `lbId` \tab `[resourceId('Microsoft.Network/loadBalancers', variables('lbName'))]` \tab Load balancer resource ID \cr
#' `lbRef` \tab `[concat('Microsoft.Network/loadBalancers/', variables('lbName'))]` \tab Load balancer template reference \cr
#' `lbFrontendName` \tab `frontend` \tab Load balancer frontend IP configuration name. Only defined if a load balancer was created or supplied as an `az_resource` object. \cr
#' `lbBackendName` \tab `backend` \tab Load balancer backend address pool name. Only defined if a load balancer was created or supplied as an `az_resource` object. \cr
#' `lbFrontendId` \tab `[concat(variables('lbId'), '/frontendIPConfigurations/', variables('lbFrontendName'))]` \tab Load balancer frontend resource ID. Only defined if a load balancer was created or supplied as an `az_resource` object. \cr
#' `lbBackendId` \tab `[concat(variables('lbId'), '/backendAddressPools/', variables('lbBackendName'))]` \tab Load balancer backend resource ID. Only defined if a load balancer was created or supplied as an `az_resource` object. \cr
#' `ipName` \tab `[concat(parameters('vmName'), '-ip')]` \tab Public IP address resource name \cr
#' `ipId` \tab `[resourceId('Microsoft.Network/publicIPAddresses', variables('ipName'))]` \tab IP resource ID \cr
#' `ipRef` \tab `[concat('Microsoft.Network/publicIPAddresses/', variables('ipName'))]` \tab IP template reference \cr
#' `asName` \tab `[concat(parameters('vmName'), '-as')]` \tab Autoscaler resource name \cr
#' `asId` \tab `[resourceId('Microsoft.Insights/autoscaleSettings', variables('asName'))]` \tab Autoscaler resource ID \cr
#' `asRef` \tab `[concat('Microsoft.Insights/autoscaleSettings/', variables('asName'))]` \tab Autoscaler template reference \cr
#' `asMaxCapacity` \tab `[mul(int(parameters('instanceCount')), 10)]` \tab Maximum capacity for the autoscaler. Only defined if an autoscaler was created. \cr
#' `asScaleValue` \tab `[max(div(int(parameters('instanceCount')), 5), 1)]` \tab Default capacity for the autoscaler. Only defined if an autoscaler was created.
#' }
#'
#' Thus, for example, if you are creating a VM scaleset named "myvmss" along with all its associated resources, the NSG is named "myvmss-nsg", the virtual network is "myvmss-vnet", the load balancer is "myvmss-lb", the public IP address is "myvmss-ip", and the autoscaler is "myvmss-as".
#'
#' @return
#' An object of S3 class `vmss_config`, that can be used by the `create_vm_scaleset` method.
#'
#' @seealso
#' [scaleset_options] for options relating to the scaleset resource itself
#'
#' [nsg_config], [ip_config], [vnet_config], [lb_config], [autoscaler_config] for other resource configs
#'
#' [build_template] for template builder methods
#'
#' [vm_config] for configuring an individual virtual machine
#'
#' [create_vm_scaleset]
#'
#' @examples
#'
#' # basic Linux (Ubuntu) and Windows configs
#' ubuntu_18.04_ss()
#' windows_2019_ss()
#'
#' # Windows DSVM scaleset, no load balancer and autoscaler
#' windows_dsvm_ss(load_balancer=NULL, autoscaler=NULL)
#'
#' # RHEL VM exposing ports 80 (HTTP) and 443 (HTTPS)
#' rhel_8_ss(nsg=nsg_config(list(nsg_rule_allow_http, nsg_rule_allow_https)))
#'
#' # exposing no ports externally
#' rhel_8_ss(nsg=nsg_config(list()))
#'
#' # low-priority (spot) VMs, large scaleset (>100 instances allowed), no managed identity
#' rhel_8_ss(options=scaleset_options(priority="spot", large_scaleset=TRUE, managed_identity=FALSE))
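#'
#' # Ubuntu scaleset with a 500GB data disk per instance, serving HTTP/HTTPS via its load balancer
#' # (a sketch using the predefined rule, probe and NSG objects supplied with this package)
#' ubuntu_18.04_ss(
#'     datadisks=500,
#'     nsg=nsg_config(list(nsg_rule_allow_http, nsg_rule_allow_https)),
#'     load_balancer=lb_config(
#'         rules=list(lb_rule_http, lb_rule_https),
#'         probes=list(lb_probe_http, lb_probe_https)
#'     )
#' )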
#'
#'
#' \dontrun{
#'
#' # reusing existing resources: placing a scaleset in an existing vnet/subnet
#' # we don't need a new network security group either
#' vnet <- AzureRMR::get_azure_login()$
#' get_subscription("sub_id")$
#' get_resource_group("rgname")$
#' get_resource(type="Microsoft.Network/virtualNetworks", name="myvnet")
#'
#' ubuntu_18.04_ss(vnet=vnet, nsg=NULL)
#'
#' }
#' @export
vmss_config <- function(image, options=scaleset_options(),
datadisks=numeric(0),
nsg=nsg_config(),
vnet=vnet_config(),
load_balancer=lb_config(),
load_balancer_address=ip_config(),
autoscaler=autoscaler_config(),
other_resources=list(),
variables=list(),
...)
{
if(is.numeric(datadisks))
datadisks <- lapply(datadisks, datadisk_config)
stopifnot(inherits(image, "image_config"))
stopifnot(inherits(options, "scaleset_options"))
stopifnot(is.list(datadisks) && all(sapply(datadisks, inherits, "datadisk_config")))
# make IP sku, balancer sku and scaleset size consistent with each other
load_balancer <- vmss_fixup_lb(options, load_balancer)
ip <- vmss_fixup_ip(options, load_balancer, load_balancer_address)
obj <- list(
image=image,
options=options,
datadisks=datadisks,
nsg=nsg,
vnet=vnet,
lb=load_balancer,
ip=ip,
as=autoscaler,
other=other_resources,
variables=variables,
vmss_fields=list(...)
)
structure(obj, class="vmss_config")
}
vmss_fixup_lb <- function(options, lb)
{
# don't try to fix load balancer if not created here
if(is.null(lb) || !inherits(lb, "lb_config"))
return(lb)
# for a large scaleset, must set sku=standard
if(!options$params$singlePlacementGroup)
{
if(is.null(lb$type))
lb$type <- "standard"
else if(tolower(lb$type) != "standard")
stop("Load balancer type must be 'standard' for large scalesets", call.=FALSE)
}
else
{
if(is.null(lb$type))
lb$type <- "basic"
}
lb
}
vmss_fixup_ip <- function(options, lb, ip)
{
# IP address only required if load balancer is present
if(is.null(lb))
return(NULL)
# don't try to fix IP if load balancer was provided as a resource id
if(is.character(lb))
return(ip)
# don't try to fix IP if not created here
if(is.null(ip) || !inherits(ip, "ip_config"))
return(ip)
lb_type <- if(is_resource(lb))
lb$sku$name
else lb$type
# for a large scaleset, must set sku=standard, allocation=static
if(!options$params$singlePlacementGroup)
{
if(is.null(ip$type))
ip$type <- "standard"
else if(tolower(ip$type) != "standard")
stop("Load balancer IP address type must be 'standard' for large scalesets", call.=FALSE)
if(is.null(ip$dynamic))
ip$dynamic <- FALSE
else if(ip$dynamic)
stop("Load balancer dynamic IP address not supported for large scalesets", call.=FALSE)
}
else
{
# defaults for small scaleset: sku=load balancer sku, allocation=dynamic
if(is.null(ip$type))
ip$type <- lb_type
if(is.null(ip$dynamic))
ip$dynamic <- tolower(ip$type) == "basic"
}
# check consistency
if(tolower(ip$type) == "standard" && ip$dynamic)
stop("Standard IP address type does not support dynamic allocation", call.=FALSE)
ip
}
#' Virtual machine scaleset options
#'
#' @param keylogin Whether to use an SSH public key to login (TRUE) or a password (FALSE). Note that Windows does not support SSH key logins.
#' @param managed_identity Whether to provide a managed system identity for the VM.
#' @param public Whether the instances (nodes) of the scaleset should be visible from the public internet.
#' @param priority The priority of the VM scaleset, either `regular` or `spot`. Spot VMs are considerably cheaper but subject to eviction if other, higher-priority workloads require compute resources.
#' @param delete_on_evict If spot-priority VMs are being used, whether evicting (shutting down) a VM should delete it, as opposed to just deallocating it.
#' @param network_accel Whether to enable accelerated networking. This option is only available for certain VM sizes.
#' @param large_scaleset Whether to enable scaleset sizes > 100 instances.
#' @param overprovision Whether to overprovision the scaleset on creation.
#' @param upgrade_policy A list, giving the VM upgrade policy for the scaleset.
#' @param os_disk_type The type of primary disk for the VM instances. Can be "Premium_LRS" (the default), "StandardSSD_LRS" or "Standard_LRS"; change this to "StandardSSD_LRS" or "Standard_LRS" if the VM size doesn't support premium storage.
#'
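#' @examples
#' # default options
#' scaleset_options()
#'
#' # spot-priority instances that are deleted rather than deallocated on eviction
#' scaleset_options(priority="spot", delete_on_evict=TRUE)
#'
#' # publicly accessible instances in a large scaleset (no single placement group)
#' scaleset_options(public=TRUE, large_scaleset=TRUE)
#'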
#' @export
scaleset_options <- function(keylogin=TRUE, managed_identity=TRUE, public=FALSE,
priority=c("regular", "spot"), delete_on_evict=FALSE,
network_accel=FALSE, large_scaleset=FALSE,
overprovision=TRUE, upgrade_policy=list(mode="manual"),
os_disk_type=c("Premium_LRS", "StandardSSD_LRS", "Standard_LRS"))
{
params <- list(
priority=match.arg(priority),
evictionPolicy=if(delete_on_evict) "delete" else "deallocate",
enableAcceleratedNetworking=network_accel,
singlePlacementGroup=!large_scaleset,
overprovision=overprovision,
upgradePolicy=upgrade_policy
)
os_disk_type <- match.arg(os_disk_type)
out <- list(keylogin=keylogin, managed_identity=managed_identity, public=public, os_disk_type=os_disk_type, params=params)
structure(out, class="scaleset_options")
}
# end of file: AzureVM/R/vmss_config.R
add_template_parameters.vmss_config <- function(config, ...)
{
add_param <- function(...)
{
new_params <- lapply(list(...), function(obj) list(type=obj))
params <<- c(params, new_params)
}
params <- sstpl_parameters_default
if(config$options$keylogin)
add_param(sshKeyData="string")
else add_param(adminPassword="securestring")
if(inherits(config$image, "image_marketplace"))
add_param(imagePublisher="string", imageOffer="string", imageSku="string", imageVersion="string")
else add_param(imageId="string")
if(length(config$datadisks) > 0)
add_param(dataDisks="array", dataDiskResources="array")
params
}
add_template_variables.vmss_config <- function(config, ...)
{
vars <- list(
location="[resourceGroup().location]",
vmId="[resourceId('Microsoft.Compute/virtualMachineScalesets', parameters('vmName'))]",
vmRef="[concat('Microsoft.Compute/virtualMachineScalesets/', parameters('vmName'))]",
vmPrefix="[parameters('vmName')]"
)
for(res in c("nsg", "vnet", "lb", "ip", "as"))
vars <- c(vars, add_template_variables(config[[res]], res))
# add any extra variables provided by the user
utils::modifyList(vars, config$variables)
}
add_template_resources.vmss_config <- function(config, ...)
{
vmss <- vmss_default
# fixup VMSS properties
if(config$options$managed_identity)
vmss$identity <- list(type="systemAssigned")
# fixup VM properties
vm <- vmss$properties$virtualMachineProfile
n_disks <- length(config$datadisks)
n_disk_resources <- if(n_disks > 0)
sum(sapply(config$datadisks, function(x) !is.null(x$res_spec)))
else 0
if(n_disks > 0)
{
vm_datadisk[[1]]$input$managedDisk$id <- NULL
vm$storageProfile$copy <- vm_datadisk
if(n_disk_resources > 0)
vmss$dependsOn <- c(vmss$dependsOn, "managedDiskResources")
}
vm$osProfile <- c(vm$osProfile,
if(config$options$keylogin) vm_key_login else vm_pwd_login)
vm$storageProfile$imageReference <- if(inherits(config$image, "image_custom"))
list(id="[parameters('imageId')]")
else list(
publisher="[parameters('imagePublisher')]",
offer="[parameters('imageOffer')]",
sku="[parameters('imageSku')]",
version="[parameters('imageVersion')]"
)
    # spot (formerly "low") priority instances get an eviction policy on the VM profile
    if(config$options$params$priority %in% c("low", "spot"))
        vm$evictionPolicy <- "[parameters('evictionPolicy')]"
if(!is_empty(config$lb))
{
vm$
networkProfile$
networkInterfaceConfigurations[[1]]$
properties$
ipConfigurations[[1]]$
properties$
            loadBalancerBackendAddressPools <- list(list(id="[variables('lbBackendId')]"))  # attach instances to the LB backend pool
}
if(config$options$public)
{
vm$
networkProfile$
networkInterfaceConfigurations[[1]]$
properties$
ipConfigurations[[1]]$
properties$
publicIpAddressConfiguration <- list(
name="pub1",
properties=list(idleTimeoutInMinutes=15)
)
}
vm$storageProfile$osDisk$managedDisk$storageAccountType <- config$options$os_disk_type
vmss$properties$virtualMachineProfile <- vm
if(!is_empty(config$vmss_fields))
vmss <- utils::modifyList(vmss, config$vmss_fields)
resources <- config[c("nsg", "vnet", "lb", "ip", "as")]
existing <- sapply(resources, existing_resource)
unused <- sapply(resources, is.null)
create <- !existing & !unused
# cannot use lapply(*, build_resource_fields) because of lapply wart
resources <- lapply(resources[create], function(x) build_resource_fields(x))
## fixup dependencies between resources
# vnet depends on nsg
# vmss depends on vnet, lb
if(create["vnet"])
{
if(!create["nsg"])
resources$vnet$dependsOn <- NULL
if(unused["nsg"])
resources$vnet$properties$subnets[[1]]$properties$networkSecurityGroup <- NULL
}
vmss_depends <- character()
if(create["lb"])
vmss_depends <- c(vmss_depends, "[variables('lbRef')]")
if(create["vnet"])
vmss_depends <- c(vmss_depends, "[variables('vnetRef')]")
vmss$dependsOn <- I(vmss_depends)
if(n_disk_resources > 0)
{
resources <- c(resources, list(disk_default))
if(n_disks > 0)
vmss$dependsOn <- c(vmss$dependsOn, "managedDiskResources")
}
resources <- c(resources, list(vmss))
if(!is_empty(config$other))
resources <- c(resources, lapply(config$other, function(x) build_resource_fields(x)))
unname(resources)
}
# end of file: AzureVM/R/vmss_template_builders.R
#' Virtual network configuration
#'
#' @param address_space For `vnet_config`, the address range accessible by the virtual network, expressed in CIDR block format.
#' @param subnets For `vnet_config`, a list of subnet objects, each obtained via a call to `subnet_config`.
#' @param name For `subnet_config`, the name of the subnet. Duplicate names will automatically be disambiguated prior to VM deployment.
#' @param addresses For `subnet_config`, the address ranges spanned by this subnet. Must be a subset of the address space available to the parent virtual network.
#' @param nsg The network security group associated with this subnet. Defaults to the NSG created as part of this VM deployment.
#' @param ... Other named arguments that will be treated as resource properties.
#'
#' @seealso
#' [create_vm], [vm_config], [vmss_config]
#' @examples
#' vnet_config()
#' vnet_config(address_space="10.1.0.0/16")
#' vnet_config(subnets=list(
#' subnet_config("subnet", "10.0.0.0/24")
#' ))
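#' # two subnets with different address ranges
#' # (prefixes that fall outside the vnet's address space will be remapped to fit)
#' vnet_config(
#'     address_space="10.1.0.0/16",
#'     subnets=list(
#'         subnet_config("subnet1", "10.1.0.0/24"),
#'         subnet_config("subnet2", "10.1.1.0/24")
#'     )
#' )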
#' @export
vnet_config <- function(address_space="10.0.0.0/16", subnets=list(subnet_config()), ...)
{
# attempt to fixup address blocks so they are consistent (should use iptools when it's fixed)
ab_regex <- "^([0-9]+\\.[0-9]+).*$"
ab_block <- sub(ab_regex, "\\1", address_space)
fixaddr <- function(addr)
{
if(sub(ab_regex, "\\1", addr) != ab_block)
sub("^[0-9]+\\.[0-9]+", ab_block, addr)
else addr
}
subnets <- lapply(subnets, function(sn)
{
if(!is_empty(sn$properties$addressPrefix))
sn$properties$addressPrefix <- fixaddr(sn$properties$addressPrefix)
if(!is_empty(sn$properties$addressPrefixes))
sn$properties$addressPrefixes <- sapply(sn$properties$addressPrefixes, fixaddr)
sn
})
# unique-ify subnet names
sn_names <- make.unique(sapply(subnets, `[[`, "name"))
for(i in seq_along(subnets))
subnets[[i]]$name <- sn_names[i]
props <- list(
addressSpace=list(addressPrefixes=I(address_space)),
subnets=subnets,
...
)
structure(list(properties=props), class="vnet_config")
}
build_resource_fields.vnet_config <- function(config, ...)
{
config$properties$subnets <- lapply(config$properties$subnets, unclass)
utils::modifyList(vnet_default, config)
}
add_template_variables.vnet_config <- function(config, ...)
{
name <- "[concat(parameters('vmName'), '-vnet')]"
id <- "[resourceId('Microsoft.Network/virtualNetworks', variables('vnetName'))]"
ref <- "[concat('Microsoft.Network/virtualNetworks/', variables('vnetName'))]"
subnet <- config$properties$subnets[[1]]$name
subnet_id <- "[concat(variables('vnetId'), '/subnets/', variables('subnet'))]"
list(vnetName=name, vnetId=id, vnetRef=ref, subnet=subnet, subnetId=subnet_id)
}
#' @rdname vnet_config
#' @export
subnet_config <- function(name="subnet", addresses="10.0.0.0/16", nsg="[variables('nsgId')]", ...)
{
properties <- if(length(addresses) < 2)
list(addressPrefix=addresses, ...)
else list(addressPrefixes=addresses, ...)
# check if supplied a network security group resource ID or object
if(is.character(nsg))
properties$networkSecurityGroup$id <- nsg
else if(is_resource(nsg) && tolower(nsg$type) == "microsoft.network/networksecuritygroups")
properties$networkSecurityGroup$id <- nsg$id
else if(!is.null(nsg))
warning("Invalid network security group", call.=FALSE)
subnet <- list(name=name, properties=properties)
structure(subnet, class="subnet_config")
}
# end of file: AzureVM/R/vnet_config.R
---
title: "Introduction to AzureVM"
author: Hong Ooi
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Introduction to AzureVM}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{utf8}
---
AzureVM is a package for interacting with virtual machines and virtual machine scalesets in Azure. You can deploy, start up, shut down, run scripts, deallocate and delete VMs and scalesets from the R command line. It uses the tools provided by the [AzureRMR package](https://github.com/Azure/AzureRMR) to manage VM resources and templates.
## Virtual machines
Here is a simple example. We create a VM using the default settings, run a shell command, resize the VM, and then delete it.
```r
library(AzureVM)
sub <- AzureRMR::get_azure_login()$
get_subscription("sub_id")
# calling create_vm() from a subscription object will create the VM in its own resource group
# default is an Ubuntu 18.04 VM, size Standard_DS3_v2, login via SSH key
# call sub$list_vm_sizes() to get the sizes available in your region
vm <- sub$create_vm("myubuntuvm", user_config("myname", "~/.ssh/id_rsa.pub"),
location="australiaeast")
# some resources used by the VM
vm$get_vnet()
vm$get_public_ip_resource()
vm$get_disk("os")
# run a shell script or command remotely (will be PowerShell on a Windows VM)
vm$run_script("echo hello world! > /tmp/hello.txt")
# ... and stop it
vm$stop()
# ... and resize it
vm$resize("Standard_DS4_v2")
# ... and delete it (this can be done asynchronously for a VM in its own group)
vm$delete()
```
AzureVM comes with a number of predefined configurations, for deploying commonly used VM images. For example, to create an Ubuntu DSVM accessible via SSH, JupyterHub and RStudio Server:
```r
sub$create_vm("mydsvm", user_config("myname", "~/.ssh/id_rsa.pub"), config="ubuntu_dsvm",
location="australiaeast")
```
And to create a Windows Server 2019 VM, accessible via RDP:
```r
sub$create_vm("mywinvm", user_config("myname", password="Use-strong-passwords!"), config="windows_2019",
location="australiaeast")
```
The available predefined configurations include `ubuntu_18.04` (the default), `ubuntu_16.04`, `ubuntu_20.04`, `ubuntu_dsvm`, `windows_2019`, `windows_2016`, `windows_dsvm`, `rhel_7.6`, `rhel_8`, `centos_7.5`, `centos_7.6`, `centos_8.1`, `debian_9_backports` and `debian_10_backports`, among others; see the help for `vm_config` for the full list. You can combine these with several other arguments to customise the VM deployment to your needs:
- `size`: VM size. Use the `list_vm_sizes` method for the subscription and resource group classes to see the available sizes.
- `datadisks`: The data disk sizes/configurations to attach.
- `ip`: Public IP address. Set this to NULL if you don't want the VM to be accessible outside its subnet.
- `vnet`: Virtual network/subnet.
- `nsg`: Network security group. AzureVM will associate the NSG with the vnet/subnet, not with the VM's network interface.
- `nic`: Network interface.
- `other_resources`: Optionally, a list of other resources to deploy.
```r
# Windows Server 2016, with a 500GB datadisk attached, not publicly accessible
sub$create_vm("mywinvm2", user_config("myname", password="Use-strong-passwords!"),
size="Standard_DS4_v2", config="windows_2016", datadisks=500, ip=NULL,
location="australiaeast")
# Ubuntu DSVM, GPU-enabled
sub$create_vm("mydsvm", user_config("myname", "~/.ssh/id_rsa.pub"), size="Standard_NC12",
config="ubuntu_dsvm_ss",
location="australiaeast")
# Red Hat VM, serving HTTP/HTTPS
sub$create_vm("myrhvm", user_config("myname", "~/.ssh/id_rsa.pub"), config="rhel_8",
nsg=nsg_config(list(nsg_rule_allow_http, nsg_rule_allow_https)),
location="australiaeast")
```
Full customisation is provided by the `vm_config` function, which also lets you specify the image to deploy, either from the marketplace or a disk. (The predefined configurations actually call `vm_config`, with the appropriate arguments for each specific config.)
```r
## custom VM configuration: Windows 10 Pro 1903 with data disks
## this assumes you have a valid Win10 desktop license
user <- user_config("myname", password="Use-strong-passwords!")
image <- image_config(
publisher="MicrosoftWindowsDesktop",
offer="Windows-10",
sku="19h1-pro"
)
datadisks <- list(
datadisk_config(250, type="Premium_LRS"),
datadisk_config(1000, type="Standard_LRS")
)
nsg <- nsg_config(
list(nsg_rule_allow_rdp)
)
sub$create_vm("mywin10vm", user,
config=vm_config(
image=image,
keylogin=FALSE,
datadisks=datadisks,
nsg=nsg,
properties=list(licenseType="Windows_Client")
),
location="australiaeast"
)
```
## VM scalesets
The equivalent to `create_vm` for scalesets is the `create_vm_scaleset` method. By default, a new scaleset will come with a load balancer and autoscaler attached, but its instances will not be externally accessible.
```r
# default is Ubuntu 18.04 scaleset, size Standard_DS1_v2
sub$create_vm_scaleset("myubuntuss", user_config("myname", "~/.ssh/id_rsa.pub"), instances=5,
location="australiaeast")
```
Each predefined VM configuration has a corresponding scaleset configuration. To specify low-level scaleset options, use the `scaleset_options` function. Here are some sample scaleset deployments:
```r
# Windows Server 2019
sub$create_vm_scaleset("mywinss", user_config("myname", password="Use-strong-passwords!"), instances=5,
config="windows_2019_ss",
location="australiaeast")
# RHEL scaleset, serving HTTP/HTTPS
sub$create_vm_scaleset("myrhelss", user_config("myname", "~/.ssh/id_rsa.pub"), instances=5,
config="rhel_8_ss",
nsg=nsg_config(list(nsg_rule_allow_http, nsg_rule_allow_https)),
location="australiaeast")
# Ubuntu DSVM, GPU-enabled, public instances, no load balancer or autoscaler
sub$create_vm_scaleset("mydsvmss", user_config("myname", "~/.ssh/id_rsa.pub"), instances=5,
size="Standard_NC6", config="ubuntu_dsvm_ss",
options=scaleset_options(public=TRUE),
load_balancer=NULL, autoscaler=NULL,
location="australiaeast")
# Large Debian scaleset (multiple placement groups), using spot VMs (low-priority)
# need to set the instance size to something that supports low-pri
sub$create_vm_scaleset("mylargess", user_config("myname", "~/.ssh/id_rsa.pub"), instances=10,
size="Standard_DS3_v2", config="debian_9_backports_ss",
options=scaleset_options(priority="spot", large_scaleset=TRUE),
location="australiaeast")
```
Working with scaleset instances can be tedious if you have a large scaleset, since R can only connect to one instance at a time. To solve this problem, AzureVM can leverage the process pool functionality supplied by AzureRMR to connect in parallel with the scaleset, leading to significant speedups. The pool is created automatically the first time it is needed, and is deleted at the end of the session.
```r
# this will create a pool of up to 10 processes that talk to the scaleset
mylargess$run_script("echo hello world! > /tmp/hello.txt")
```
You can control the size of the pool with the global `azure_vm_minpoolsize` and `azure_vm_maxpoolsize` options, which have default values 2 and 10 respectively. To turn off parallel connections, set `options(azure_vm_maxpoolsize=0)`. Note that the pool size is unrelated to the _scaleset_ size; it only controls how many instances can communicate with AzureVM simultaneously.
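For example, to widen the pool for a very large scaleset, or to disable it altogether (the values here are arbitrary):
```r
# allow up to 20 simultaneous connections to scaleset instances
options(azure_vm_maxpoolsize=20)

# turn off parallel connections entirely
options(azure_vm_maxpoolsize=0)
```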
## Sharing resources
You can also include an existing Azure resource in a deployment, by supplying an AzureRMR `az_resource` object as an argument in the `create_vm` or `create_vm_scaleset` call. For example, here we create a VM and a scaleset that share a single virtual network/subnet.
```r
## VM and scaleset in the same resource group and virtual network
# first, create the resgroup
rg <- sub$create_resource_group("rgname", "australiaeast")
# create the master
rg$create_vm("mastervm", user_config("myname", "~/.ssh/id_rsa.pub"))
# get the vnet resource
vnet <- rg$get_resource(type="Microsoft.Network/virtualNetworks", name="mastervm-vnet")
# create the scaleset
# since the NSG is associated with the vnet, we don't need to create a new NSG either
rg$create_vm_scaleset("slavess", user_config("myname", "~/.ssh/id_rsa.pub"),
instances=5, vnet=vnet, nsg=NULL)
```
| /scratch/gouwar.j/cran-all/cranData/AzureVM/inst/doc/intro.rmd |
---
title: "Introduction to AzureVM"
author: Hong Ooi
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Introduction to AzureVM}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{utf8}
---
AzureVM is a package for interacting with virtual machines and virtual machine scalesets in Azure. You can deploy, start up, shut down, run scripts, deallocate and delete VMs and scalesets from the R command line. It uses the tools provided by the [AzureRMR package](https://github.com/Azure/AzureRMR) to manage VM resources and templates.
## Virtual machines
Here is a simple example. We create a VM using the default settings, run a shell command, resize the VM, and then delete it.
```r
library(AzureVM)
sub <- AzureRMR::get_azure_login()$
get_subscription("sub_id")
# calling create_vm() from a subscription object will create the VM in its own resource group
# default is an Ubuntu 18.04 VM, size Standard_DS3_v2, login via SSH key
# call sub$list_vm_sizes() to get the sizes available in your region
vm <- sub$create_vm("myubuntuvm", user_config("myname", "~/.ssh/id_rsa.pub"),
location="australiaeast")
# some resources used by the VM
vm$get_vnet()
vm$get_public_ip_resource()
vm$get_disk("os")
# run a shell script or command remotely (will be PowerShell on a Windows VM)
vm$run_script("echo hello world! > /tmp/hello.txt")
# ... and stop it
vm$stop()
# ... and resize it
vm$resize("Standard_DS4_v2")
# ... and delete it (this can be done asynchronously for a VM in its own group)
vm$delete()
```
AzureVM comes with a number of predefined configurations, for deploying commonly used VM images. For example, to create an Ubuntu DSVM accessible via SSH, JupyterHub and RStudio Server:
```r
sub$create_vm("mydsvm", user_config("myname", "~/.ssh/id_rsa.pub"), config="ubuntu_dsvm",
location="australiaeast")
```
And to create a Windows Server 2019 VM, accessible via RDP:
```r
sub$create_vm("mywinvm", user_config("myname", password="Use-strong-passwords!"), config="windows_2019",
location="australiaeast")
```
The available predefined configurations are `ubuntu_18.04` (the default), `ubuntu_16.04`, `ubuntu_dsvm`, `windows_2019`, `windows_2016`, `windows_dsvm`, `rhel_7.6`, `rhel_8`, `centos_7.5`, `centos_7.6`, `debian_8_backports` and `debian_9_backports`. You can combine these with several other arguments to customise the VM deployment to your needs:
- `size`: VM size. Use the `list_vm_sizes` method for the subscription and resource group classes to see the available sizes.
- `datadisks`: The data disk sizes/configurations to attach.
- `ip`: Public IP address. Set this to NULL if you don't want the VM to be accessible outside its subnet.
- `vnet`: Virtual network/subnet.
- `nsg`: Network security group. AzureVM will associate the NSG with the vnet/subnet, not with the VM's network interface.
- `nic`: Network interface.
- `other_resources`: Optionally, a list of other resources to deploy.
```r
# Windows Server 2016, with a 500GB datadisk attached, not publicly accessible
sub$create_vm("mywinvm2", user_config("myname", password="Use-strong-passwords!"),
size="Standard_DS4_v2", config="windows_2016", datadisks=500, ip=NULL,
location="australiaeast")
# Ubuntu DSVM, GPU-enabled
sub$create_vm("mydsvm", user_config("myname", "~/.ssh/id_rsa.pub"), size="Standard_NC12",
config="ubuntu_dsvm_ss",
location="australiaeast")
# Red Hat VM, serving HTTP/HTTPS
sub$create_vm("myrhvm", user_config("myname", "~/.ssh/id_rsa.pub"), config="rhel_8",
nsg=nsg_config(list(nsg_rule_allow_http, nsg_rule_allow_https)),
location="australiaeast")
```
Full customisation is provided by the `vm_config` function, which also lets you specify the image to deploy, either from the marketplace or a disk. (The predefined configurations actually call `vm_config`, with the appropriate arguments for each specific config.)
```r
## custom VM configuration: Windows 10 Pro 1903 with data disks
## this assumes you have a valid Win10 desktop license
user <- user_config("myname", password="Use-strong-passwords!")
image <- image_config(
publisher="MicrosoftWindowsDesktop",
offer="Windows-10",
sku="19h1-pro"
)
datadisks <- list(
datadisk_config(250, type="Premium_LRS"),
datadisk_config(1000, type="Standard_LRS")
)
nsg <- nsg_config(
list(nsg_rule_allow_rdp)
)
sub$create_vm("mywin10vm", user,
config=vm_config(
image=image,
keylogin=FALSE,
datadisks=datadisks,
nsg=nsg,
properties=list(licenseType="Windows_Client")
),
location="australiaeast"
)
```
## VM scalesets
The equivalent to `create_vm` for scalesets is the `create_vm_scaleset` method. By default, a new scaleset will come with a load balancer and autoscaler attached, but its instances will not be externally accessible.
```r
# default is Ubuntu 18.04 scaleset, size Standard_DS1_v2
sub$create_vm_scaleset("myubuntuss", user_config("myname", "~/.ssh/id_rsa.pub"), instances=5,
location="australiaeast")
```
Each predefined VM configuration has a corresponding scaleset configuration. To specify low-level scaleset options, use the `scaleset_options` function. Here are some sample scaleset deployments:
```r
# Windows Server 2019
sub$create_vm_scaleset("mywinss", user_config("myname", password="Use-strong-passwords!"), instances=5,
config="windows_2019_ss",
location="australiaeast")
# RHEL scaleset, serving HTTP/HTTPS
sub$create_vm_scaleset("myrhelss", user_config("myname", "~/.ssh/id_rsa.pub"), instances=5,
config="rhel_8_ss",
nsg=nsg_config(list(nsg_rule_allow_http, nsg_rule_allow_https)),
location="australiaeast")
# Ubuntu DSVM, GPU-enabled, public instances, no load balancer or autoscaler
sub$create_vm_scaleset("mydsvmss", user_config("myname", "~/.ssh/id_rsa.pub"), instances=5,
size="Standard_NC6", config="ubuntu_dsvm_ss",
options=scaleset_options(public=TRUE),
load_balancer=NULL, autoscaler=NULL,
location="australiaeast")
# Large Debian scaleset (multiple placement groups), using spot VMs (low-priority)
# need to set the instance size to something that supports low-pri
sub$create_vm_scaleset("mylargess", user_config("myname", "~/.ssh/id_rsa.pub"), instances=10,
size="Standard_DS3_v2", config="debian_9_backports_ss",
options=scaleset_options(priority="spot", large_scaleset=TRUE),
location="australiaeast")
```
Working with scaleset instances can be tedious if you have a large scaleset, since R can only connect to one instance at a time. To solve this problem, AzureVM can leverage the process pool functionality supplied by AzureRMR to connect in parallel with the scaleset, leading to significant speedups. The pool is created automatically the first time it is needed, and is deleted at the end of the session.
```r
# this will create a pool of up to 10 processes that talk to the scaleset
mylargess$run_script("echo hello world! > /tmp/hello.txt")
```
You can control the size of the pool with the global `azure_vm_minpoolsize` and `azure_vm_maxpoolsize` options, which have default values 2 and 10 respectively. To turn off parallel connections, set `options(azure_vm_maxpoolsize=0)`. Note that the pool size is unrelated to the _scaleset_ size; it only controls how many instances can communicate with AzureVM simultaneously.
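For example (the values shown here are purely illustrative):
```r
# allow up to 20 parallel connections to scaleset instances in this session
options(azure_vm_minpoolsize=5, azure_vm_maxpoolsize=20)
# turn off parallel connections entirely
options(azure_vm_maxpoolsize=0)
```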
## Sharing resources
You can also include an existing Azure resource in a deployment, by supplying an AzureRMR `az_resource` object as an argument in the `create_vm` or `create_vm_scaleset` call. For example, here we create a VM and a scaleset that share a single virtual network/subnet.
```r
## VM and scaleset in the same resource group and virtual network
# first, create the resgroup
rg <- sub$create_resource_group("rgname", "australiaeast")
# create the master
rg$create_vm("mastervm", user_config("myname", "~/.ssh/id_rsa.pub"))
# get the vnet resource
vnet <- rg$get_resource(type="Microsoft.Network/virtualNetworks", name="mastervm-vnet")
# create the scaleset
# since the NSG is associated with the vnet, we don't need to create a new NSG either
rg$create_vm_scaleset("slavess", user_config("myname", "~/.ssh/id_rsa.pub"),
instances=5, vnet=vnet, nsg=NULL)
```
| /scratch/gouwar.j/cran-all/cranData/AzureVM/vignettes/intro.rmd |
metadata_host <- httr::parse_url("http://169.254.169.254")
inst_api_version <- "2019-02-01"
att_api_version <- "2018-10-01"
ev_api_version <- "2017-11-01"
#' Metadata for an Azure VM
#'
#' @param nonce For `update_attested_metadata`, an optional string to use as a nonce.
#' @details
#' The `instance`, `attested` and `events` environments contain the instance metadata, attested metadata, and scheduled events respectively for a VM running in Azure. `instance` and `attested` are automatically populated when you load the AzureVMmetadata package, or you can manually populate them yourself with the `update_instance_metadata` and `update_attested_metadata` functions. `events` is not populated at package startup, because calling the scheduled event service can require up to several minutes if it is not running already. You can manually populate it with the `update_scheduled_events` function.
#'
#' If AzureVMmetadata is loaded in an R session that is _not_ running in an Azure VM, all the metadata environments will be empty.
#'
#' @return
#' The updating functions return the contents of their respective environments as lists, invisibly.
#' @format
#' `instance`, `attested` and `events` are environments.
#' @seealso
#' [in_azure_vm]
#'
#' [Instance metadata service documentation](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/instance-metadata-service)
#'
#' To obtain OAuth tokens from the metadata service, see [AzureAuth::get_managed_token]
#'
#' @examples
#'
#' ## these will only be meaningful when run in an Azure VM
#'
#' # all compute metadata
#' AzureVMmetadata::instance$compute
#'
#' # VM name and ID
#' AzureVMmetadata::instance$compute$name
#' AzureVMmetadata::instance$compute$vmId
#'
#' # VM resource details: subscription, resource group, resource ID
#' AzureVMmetadata::instance$compute$subscriptionId
#' AzureVMmetadata::instance$compute$resourceGroupName
#' AzureVMmetadata::instance$compute$resourceId
#'
#' # all network metadata
#' AzureVMmetadata::instance$network
#'
#' # IPv4 address details (1st network interface)
#' AzureVMmetadata::instance$network$interface[[1]]$ipv4
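#'
#' \dontrun{
#' # scheduled events are not populated at startup; update them manually
#' # (the first call may take several minutes if the events service is not yet running)
#' update_scheduled_events()
#' ls(AzureVMmetadata::events)
#' }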
#'
#' @rdname metadata
#' @export
instance <- new.env()
#' @rdname metadata
#' @export
attested <- new.env()
#' @rdname metadata
#' @export
events <- new.env()
#' @rdname metadata
#' @export
update_instance_metadata <- function()
{
metadata_host$path <- "metadata/instance"
    metadata_host$query <- list(`api-version`=inst_api_version)
res <- try(httr::GET(metadata_host, httr::add_headers(metadata=TRUE)), silent=TRUE)
if(!inherits(res, "response") || res$status_code > 299)
return(invisible(NULL))
inst <- httr::content(res)
for(x in names(inst))
instance[[x]] <- inst[[x]]
invisible(inst)
}
#' @rdname metadata
#' @export
update_attested_metadata <- function(nonce=NULL)
{
metadata_host$path <- "metadata/attested/document"
metadata_host$query <- list(`api-version`=att_api_version, nonce=nonce)
res <- try(httr::GET(metadata_host, httr::add_headers(metadata=TRUE)), silent=TRUE)
if(!inherits(res, "response") || res$status_code > 299)
return(invisible(NULL))
att <- httr::content(res)
for(x in names(att))
attested[[x]] <- att[[x]]
invisible(att)
}
#' @rdname metadata
#' @export
update_scheduled_events <- function()
{
metadata_host$path <- "metadata/scheduledevents"
metadata_host$query <- list(`api-version`=ev_api_version)
res <- try(httr::GET(metadata_host, httr::add_headers(metadata=TRUE)), silent=TRUE)
if(!inherits(res, "response") || res$status_code > 299)
return(invisible(NULL))
ev <- httr::content(res)
for(x in names(ev))
events[[x]] <- ev[[x]]
invisible(ev)
}
#' Check if R is running in an Azure VM
#' @param nonce An optional string to use as a nonce.
#' @details
#' These functions check if R is running in an Azure VM by attempting to contact the instance metadata host. `in_azure_vm` simply returns TRUE or FALSE based on whether it succeeds. `get_vm_cert` provides a stronger check, by retrieving the VM's certificate and throwing an error if this is not found. Note that you should still verify the certificate's authenticity before relying on it.
#' @return
#' For `in_azure_vm`, a boolean. For `get_vm_cert`, a PKCS-7 certificate object.
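#' @examples
#' \dontrun{
#'
#' # these calls are only meaningful when run inside an Azure VM
#' in_azure_vm()
#'
#' cert <- get_vm_cert()
#' cert
#'
#' }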
#' @export
in_azure_vm <- function()
{
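    # the metadata host returns HTTP 400 when queried without the required headers;
    # getting any response at all (rather than a connection error) means we are in Azure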
obj <- try(httr::GET(metadata_host), silent=TRUE)
inherits(obj, "response") && httr::status_code(obj) == 400
}
#' @rdname in_azure_vm
#' @export
get_vm_cert <- function(nonce=NULL)
{
update_attested_metadata(nonce)
if(is.null(attested$signature))
stop("No certificate found", call.=FALSE)
openssl::read_p7b(openssl::base64_decode(attested$signature))[[1]]
}
.onLoad <- function(libname, pkgname)
{
update_instance_metadata()
update_attested_metadata()
}
| /scratch/gouwar.j/cran-all/cranData/AzureVMmetadata/R/AzureVMmetadata.R |
#' @import AzureRMR
#' @import AzureCognitive
NULL
#' @export
AzureCognitive::cognitive_endpoint
#' @export
AzureCognitive::call_cognitive_endpoint
utils::globalVariables("id")
.onLoad <- function(libname, pkgname)
{
options(azure_computervision_api_version="v2.1")
options(azure_customvision_training_api_version="v3.1")
options(azure_customvision_prediction_api_version="v3.0")
}
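# ask the user to confirm a destructive operation; returns TRUE only on an explicit "yes"
# (falls back to readline() on R versions before 3.5.0, where utils::askYesNo is unavailable)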
confirm_delete <- function(msg, confirm)
{
if(!interactive() || !confirm)
return(TRUE)
ok <- if(getRversion() < numeric_version("3.5.0"))
{
msg <- paste(msg, "(yes/No/cancel) ")
yn <- readline(msg)
if(nchar(yn) == 0)
FALSE
else tolower(substr(yn, 1, 1)) == "y"
}
else utils::askYesNo(msg, FALSE)
isTRUE(ok)
}
| /scratch/gouwar.j/cran-all/cranData/AzureVision/R/AzureVision.R |
#' @export
print.customvision_project <- function(x, ...)
{
cat("Azure Custom Vision project '", x$project$name, "' (", x$project$id, ")\n", sep="")
cat(" Endpoint:", httr::build_url(x$endpoint$url), "\n")
domain_id <- x$project$settings$domainId
compact <- is_compact_domain(domain_id)
domains <- if(compact) unlist(.compact_domain_ids) else unlist(.domain_ids)
domain_name <- names(domains)[domains == domain_id]
if(compact)
domain_name <- paste0(domain_name, ".compact")
domain_name <- paste0(domain_name, " (", domain_id, ")")
cat(" Domain:", domain_name, "\n")
export_type <- if(!compact)
"none"
else if(is_empty(x$project$settings$targetExportPlatforms))
"standard"
else "Vision AI Dev Kit"
cat(" Export target:", export_type, "\n")
classtype <- if(get_purpose_from_domain_id(domain_id) == "object_detection")
NA_character_
else x$project$settings$classificationType
cat(" Classification type:", classtype, "\n")
invisible(x)
}
#' Create, retrieve, update and delete Azure Custom Vision projects
#'
#' @param endpoint A custom vision endpoint.
#' @param object For `delete_customvision_project`, either an endpoint, or a project object.
#' @param name,id The name and ID of the project. At least one of these must be specified for `get_project`, `update_project` and `delete_project`. The name is required for `create_project` (the ID will be assigned automatically).
#' @param domain What kinds of images the model is meant to apply to. The default "general" means the model is suitable for use in a generic setting. Other, more specialised domains for classification include "food", "landmarks" and "retail"; for object detection the other possible domain is "logo".
#' @param export_target What formats are supported when exporting the model.
#' @param multiple_tags For classification models, whether multiple categories (tags/labels) for an image are allowed. The default is `FALSE`, meaning an image represents one and only one category. Ignored for object detection models.
#' @param description An optional text description of the project.
#' @param ... Further arguments passed to lower-level methods.
#' @details
#' A Custom Vision project contains the metadata for a model: its intended purpose (classification vs object detection), the domain, the set of training images, and so on. Once you have created a project, you upload images to it, and train models based on those images. A trained model can then be published as a predictive service, or exported for standalone use.
#'
#' By default, a Custom Vision project does not support exporting the model; this allows it to be more complex, and thus potentially more accurate. Setting `export_target="standard"` enables exporting to the following formats:
#' - ONNX 1.2
#' - CoreML, for iOS 11 devices
#' - TensorFlow
#' - TensorFlow Lite, for Android devices
#' - A Docker image for the Windows, Linux or Raspberry Pi 3 (ARM) platform
#'
#' Setting `export_target="vaidk"` allows exporting to Vision AI Development Kit format, in addition to the above.
#' @return
#' `delete_project` returns NULL invisibly, on a successful deletion. The others return an object of class `customvision_project`.
#' @seealso
#' [`customvision_training_endpoint`], [`add_images`], [`train_model`], [`publish_model`], [`predict.customvision_model`], [`do_training_op`]
#'
#' - [CustomVision.ai](https://www.customvision.ai/): An interactive site for building Custom Vision models, provided by Microsoft
#' - [Training API reference](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.0/operations/5c771cdcbf6a2b18a0c3b7fa)
#' - [Prediction API reference](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c15)
#' @examples
#' \dontrun{
#'
#' endp <- customvision_training_endpoint(url="endpoint_url", key="key")
#'
#' create_classification_project(endp, "myproject")
#' create_classification_project(endp, "mymultilabelproject", multiple_tags=TRUE)
#' create_object_detection_project(endp, "myobjdetproj")
#'
#' create_classification_project(endp, "mystdproject", export_target="standard")
#'
#' list_projects(endp)
#'
#' get_project(endp, "myproject")
#'
#' update_project(endp, "myproject", export_target="vaidk")
#'
#' }
#' @aliases customvision_project
#' @rdname customvision_project
#' @export
create_classification_project <- function(endpoint, name,
domain="general",
export_target=c("none", "standard", "vaidk"),
multiple_tags=FALSE,
description=NULL)
{
export_target <- match.arg(export_target)
create_project(endpoint, name, domain, export_target, multiple_tags, description,
purpose="classification")
}
#' @rdname customvision_project
#' @export
create_object_detection_project <- function(endpoint, name,
domain="general",
export_target=c("none", "standard", "vaidk"),
description=NULL)
{
export_target <- match.arg(export_target)
create_project(endpoint, name, domain, export_target, multiple_tags=FALSE, description,
purpose="object_detection")
}
create_project <- function(endpoint, name,
domain="general",
export_target=c("none", "standard", "vaidk"),
multiple_tags=FALSE,
description=NULL,
purpose=c("classification", "object_detection"))
{
purpose <- match.arg(purpose)
export_target <- match.arg(export_target)
domain_id <- get_domain_id(domain, purpose, export_target)
type <- if(purpose == "object_detection")
NULL
else if(multiple_tags)
"multilabel"
else "multiclass"
opts <- list(
name=name,
domainId=domain_id,
classificationType=type,
description=description
)
obj <- call_cognitive_endpoint(endpoint, "training/projects", options=opts, http_verb="POST")
# if export target is Vision AI Dev Kit, must do a separate update
if(export_target == "vaidk")
return(update_project(endpoint, id=obj$id, export_target="VAIDK"))
make_customvision_project(obj, endpoint)
}
#' @rdname customvision_project
#' @export
list_projects <- function(endpoint)
{
lst <- named_list(call_cognitive_endpoint(endpoint, "training/projects"))
sapply(lst, make_customvision_project, endpoint=endpoint, simplify=FALSE)
}
#' @rdname customvision_project
#' @export
get_project <- function(endpoint, name=NULL, id=NULL)
{
if(is.null(id))
id <- get_project_id_by_name(endpoint, name)
obj <- call_cognitive_endpoint(endpoint, file.path("training/projects", id))
make_customvision_project(obj, endpoint)
}
#' @rdname customvision_project
#' @export
update_project <- function(endpoint, name=NULL, id=NULL,
domain="general",
export_target=c("none", "standard", "vaidk"),
multiple_tags=FALSE,
description=NULL)
{
if(is.null(id))
id <- get_project_id_by_name(endpoint, name)
project <- get_project(endpoint, id=id)
newbody <- list()
if(!is.null(name) && name != project$name)
newbody$name <- name
if(!missing(description))
newbody$description <- description
newbody$settings <- project$settings
newtarget <- !missing(export_target)
newdomain <- !missing(domain)
newclasstype <- !missing(multiple_tags)
export_target <- if(newtarget)
match.arg(export_target)
else if(!is_compact_domain(project$settings$domainId))
"none"
else if(is_empty(project$settings$targetExportPlatforms))
"standard"
else "vaidk"
if(newtarget || newdomain)
{
purpose <- get_purpose_from_domain_id(project$settings$domainId)
newbody$settings$domainId <- get_domain_id(domain, purpose, export_target)
}
if(newclasstype)
newbody$settings$classificationType <- if(multiple_tags) "Multilabel" else "Multiclass"
if(export_target == "vaidk")
newbody$settings$targetExportPlatforms <- I("VAIDK")
obj <- call_cognitive_endpoint(endpoint, file.path("training/projects", id), body=newbody, http_verb="PATCH")
make_customvision_project(obj, endpoint)
}
#' @rdname customvision_project
#' @export
delete_project <- function(object, ...)
{
UseMethod("delete_project")
}
#' @export
delete_project.customvision_training_endpoint <- function(object, name=NULL, id=NULL, confirm=TRUE, ...)
{
if(is.null(id))
id <- get_project_id_by_name(object, name)
msg <- sprintf("Are you sure you really want to delete the project '%s'?", if(!is.null(name)) name else id)
if(!confirm_delete(msg, confirm))
return(invisible(NULL))
call_cognitive_endpoint(object, file.path("training/projects", id), http_verb="DELETE")
invisible(NULL)
}
#' @export
delete_project.customvision_project <- function(object, confirm=TRUE, ...)
{
name <- object$project$name
id <- object$project$id
msg <- sprintf("Are you sure you really want to delete the project '%s'?", name)
if(!confirm_delete(msg, confirm))
return(invisible(NULL))
call_cognitive_endpoint(object$endpoint, file.path("training/projects", id), http_verb="DELETE")
invisible(NULL)
}
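# predefined Custom Vision domain IDs, grouped by project purpose;
# the "compact" domains are used when a project is created with export support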
.domain_ids <- list(
classification=c(
general="ee85a74c-405e-4adc-bb47-ffa8ca0c9f31",
food="c151d5b5-dd07-472a-acc8-15d29dea8518",
landmarks="ca455789-012d-4b50-9fec-5bb63841c793",
retail="b30a91ae-e3c1-4f73-a81e-c270bff27c39"
),
object_detection=c(
general="da2e3a8a-40a5-4171-82f4-58522f70fbc1",
logo="1d8ffafe-ec40-4fb2-8f90-72b3b6cecea4"
)
)
.compact_domain_ids <- list(
classification=c(
general="0732100f-1a38-4e49-a514-c9b44c697ab5",
food="8882951b-82cd-4c32-970b-d5f8cb8bf6d7",
landmarks="b5cfd229-2ac7-4b2b-8d0a-2b0661344894",
retail="6b4faeda-8396-481b-9f8b-177b9fa3097f"
),
object_detection=c(
general="a27d5ca5-bb19-49d8-a70a-fec086c47f5b"
)
)
get_domain_id <- function(domain, purpose, export_target)
{
domainlst <- if(export_target == "none") .domain_ids else .compact_domain_ids
ids <- domainlst[[purpose]]
i <- which(domain == names(ids))
if(is_empty(i))
stop(sprintf("Domain '%s' not found", domain), call.=FALSE)
ids[i]
}
get_purpose_from_domain_id <- function(id)
{
domainlst <- if(is_compact_domain(id)) .compact_domain_ids else .domain_ids
i <- which(sapply(domainlst, function(domains) id %in% domains))
names(domainlst)[i]
}
is_compact_domain <- function(id)
{
id %in% unlist(.compact_domain_ids)
}
is_classification_project <- function(project)
{
domain_id <- project$settings$domainId
domains <- if(is_compact_domain(domain_id)) unlist(.compact_domain_ids) else unlist(.domain_ids)
domain_name <- names(domains)[domains == domain_id]
substr(domain_name, 1, 5) == "class"
}
get_project_id_by_name <- function(endpoint, name=NULL)
{
if(is.null(name))
stop("Either name or ID must be supplied", call.=FALSE)
lst <- list_projects(endpoint)
i <- which(sapply(lst, function(obj) obj$project$name == name))
if(is_empty(i))
stop(sprintf("Project '%s' not found", name), call.=FALSE)
lst[[i]]$project$id
}
make_customvision_project <- function(object, endpoint)
{
projclass <- if(is_classification_project(object))
"classification_project"
else "object_detection_project"
classes <- c(projclass, "customvision_project")
if(projclass == "classification_project")
classes <- c(paste0(tolower(object$settings$classificationType), "_project"), classes)
structure(list(endpoint=endpoint, project=object), class=classes)
}
| /scratch/gouwar.j/cran-all/cranData/AzureVision/R/customvision.R |
#' Add, list and remove images for a project
#'
#' @param project A Custom Vision project.
#' @param images For `add_images`, the images to add (upload) to the project.
#' @param image_ids For `remove_images`, the IDs of the images to remove from the project.
#' @param tags Optional tags to add to the images. Only for classification projects.
#' @param regions Optional list of regions in the images that contain objects. Only for object detection projects.
#' @param include For `list_images`, which images to include in the list: untagged, tagged, or both (the default).
#' @param as For `list_images`, the return value: a vector of image IDs, a data frame of image metadata, or a list of metadata.
#' @param iteration For `list_images`, the iteration ID (roughly, which model generation to use). Defaults to the latest iteration.
#' @param confirm For `remove_images`, whether to ask for confirmation first.
#' @param ... Arguments passed to lower-level functions.
#' @details
#' The images to be uploaded can be specified as:
#' - A vector of local filenames. JPG, PNG and GIF file formats are supported.
#' - A vector of publicly accessible URLs.
#' - A raw vector, or a list of raw vectors, holding the binary contents of the image files.
#'
#' Uploaded images can also have _tags_ added (for a classification project) or _regions_ (for an object detection project). Classification tags can be specified in the following ways:
#' - For a regular classification project (one tag per image), as a vector of strings. The tags will be applied to the images in order. If the length of the vector is 1, it will be recycled to the length of `image_ids`.
#' - For a multilabel classification project (multiple tags per image), as a _list_ of vectors of strings. Each vector in the list contains the tags to be assigned to the corresponding image. If the length of the list is 1, it will be recycled to the length of `image_ids`.
#'
#' Object detection projects also have tags, but they are specified as part of the `regions` argument. The regions to add should be specified as a list of data frames, with one data frame per image. Each data frame should have one row per region, and the following columns:
#' - `left`, `top`, `width`, `height`: the location and dimensions of the region bounding box, normalised to be between 0 and 1.
#' - `tag`: the name of the tag to associate with the region.
#'
#' Any other columns in the data frame will be ignored. If the length of the list is 1, it will be recycled to the length of `image_ids`.
#'
#' Note that once uploaded, images are identified only by their ID; there is no general link back to the source filename or URL. If you don't include tags or regions in the `add_images` call, be sure to save the returned IDs and then call [`add_image_tags`] or [`add_image_regions`] as appropriate.
#' @return
#' For `add_images`, the vector of IDs of the uploaded images.
#'
#' For `list_images`, based on the value of the `as` argument. The default is a vector of image IDs; `as="list"` returns a (nested) list of image metadata with one component per image; and `as="dataframe"` returns the same metadata but reshaped into a data frame.
#' @seealso
#' [`add_image_tags`] and [`add_image_regions`] to add tags and regions to images, if not done at upload time
#'
#' [`add_tags`], [`list_tags`], [`remove_tags`]
#'
#' [`customvision_project`]
#' @examples
#' \dontrun{
#'
#' endp <- customvision_training_endpoint(url="endpoint_url", key="key")
#'
#' # classification
#' proj1 <- create_classification_project(endp, "myproject")
#' list_images(proj1)
#' imgs <- dir("path/to/images", full.names=TRUE)
#'
#' # recycling: apply one tag to all images
#' add_images(proj1, imgs, tags="mytag")
#' list_images(proj1, include="tagged", as="dataframe")
#'
#' # different tags per image
#' add_images(proj1, c("cat.jpg", "dog.jpg", tags=c("cat", "dog"))
#'
#' # adding online images
#' host <- "https://mysite.example.com/"
#' img_urls <- paste0(host, c("img1.jpg", "img2.jpg", "img3.jpg"))
#' add_images(proj1, img_urls, tags="mytag")
#'
#' # multiple label classification
#' proj2 <- create_classification_project(endp, "mymultilabelproject", multiple_tags=TRUE)
#'
#' add_images(proj2, imgs, tags=list(c("tag1", "tag2")))
#' add_images(proj2, c("catanddog.jpg", "cat.jpg", "dog.jpg"),
#' tags=list(
#' c("cat", "dog"),
#' "cat",
#' "dog"
#' )
#' )
#'
#' # object detection
#' proj3 <- create_object_detection_project(endp, "myobjdetproj")
#'
#' regions <- list(
#' data.frame(
#' tag=c("cat", "dog"),
#' left=c(0.1, 0.5),
#' top=c(0.25, 0.28),
#' width=c(0.24, 0.21),
#' height=c(0.7, 0.6)
#' ),
#' data.frame(
#' tag="cat", left=0.5, top=0.35, width=0.25, height=0.62
#' ),
#' data.frame(
#' tag="dog", left=0.07, top=0.12, width=0.79, height=0.5
#' )
#' )
#' add_images(proj3, c("catanddog.jpg", "cat.jpg", "dog.jpg"), regions=regions)
#'
#' }
#' @aliases customvision_images
#' @rdname customvision_images
#' @export
add_images <- function(project, ...)
{
UseMethod("add_images")
}
#' @rdname customvision_images
#' @export
add_images.classification_project <- function(project, images, tags=NULL, ...)
{
img_ids <- add_images_internal(project, images)
if(!is_empty(tags))
add_image_tags(project, img_ids, tags)
img_ids
}
#' @rdname customvision_images
#' @export
add_images.object_detection_project <- function(project, images, regions=NULL, ...)
{
img_ids <- add_images_internal(project, images)
if(!is_empty(regions))
add_image_regions(project, img_ids, regions)
img_ids
}
add_images_internal <- function(project, images)
{
bodies <- images_to_bodies(images)
src_names <- names(bodies)
op <- if(is.null(bodies[[1]]$contents)) "images/urls" else "images/files"
lst <- list()
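    # upload in batches of 64 images at a time, to stay within the service's batch size limit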
while(!is_empty(bodies))
{
idx <- seq_len(min(length(bodies), 64))
res <- do_training_op(project, op, body=list(images=unname(bodies[idx])), http_verb="POST")
if(!res$isBatchSuccessful)
stop("Not all images were successfully added", call.=FALSE)
bodies <- bodies[-idx]
lst <- c(lst, res$images)
}
    # reorder the upload results to match the original image vector
    srcs <- sapply(lst, `[[`, "sourceUrl")
    lst <- lapply(lst[match(src_names, srcs)], `[[`, "image")
    sapply(lst, function(x) x$id)
}
#' @rdname customvision_images
#' @export
list_images <- function(project, include=c("all", "tagged", "untagged"), as=c("ids", "dataframe", "list"),
iteration=NULL)
{
get_paged_list <- function(op)
{
skip <- 0
lst <- list()
repeat
{
opts <- list(iterationId=iteration, take=256, skip=skip)
res <- do_training_op(project, op, options=opts, simplifyVector=simplify)
if(is_empty(res))
break
skip <- skip + 256
lst <- if(as == "ids")
c(lst, res$id)
else c(lst, list(res))
}
switch(as,
ids=unlist(lst),
dataframe=do.call(rbind, lst),
list=lst
)
}
include <- match.arg(include)
as <- match.arg(as)
simplify <- as != "list"
tagged_imgs <- if(include != "untagged") get_paged_list("images/tagged") else NULL
untagged_imgs <- if(include != "tagged") get_paged_list("images/untagged") else NULL
if(as == "ids")
as.character(c(tagged_imgs, untagged_imgs))
else if(as == "dataframe")
{
if(is.data.frame(untagged_imgs) && nrow(untagged_imgs) > 0)
{
untagged_imgs$tags <- NA
if(!is.null(tagged_imgs$regions))
untagged_imgs$regions <- NA
}
rbind.data.frame(tagged_imgs, untagged_imgs)
}
else c(tagged_imgs, untagged_imgs)
}
#' @rdname customvision_images
#' @export
remove_images <- function(project, image_ids=list_images(project, "untagged", as="ids"), confirm=TRUE)
{
if(!confirm_delete("Are you sure you want to remove images from the project?", confirm))
return(invisible(project))
while(!is_empty(image_ids))
{
idx <- seq_len(min(length(image_ids), 256))
image_batch <- paste0(image_ids[idx], collapse=",")
do_training_op(project, "images", options=list(imageIds=image_batch), http_verb="DELETE")
image_ids <- image_ids[-idx]
}
invisible(NULL)
}
#' View images uploaded to a Custom Vision project
#'
#' @param project A Custom Vision project.
#' @param img_ids The IDs of the images to view. You can use [`list_images`] to get the image IDs for this project.
#' @param which Which image to view: the resized version used for training (the default), the original uploaded image, or the thumbnail.
#' @param max_images The maximum number of images to display.
#' @param iteration The iteration ID (roughly, which model generation to use). Defaults to the latest iteration.
#' @details
#' Images in a Custom Vision project are stored in Azure Storage. This function gets the URLs for the uploaded images and displays them in your browser.
#' @seealso
#' [`list_images`]
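#' @examples
#' \dontrun{
#'
#' # assuming myproj is a Custom Vision project object
#' img_ids <- list_images(myproj, "tagged", as="ids")
#' browse_images(myproj, img_ids[1:5])
#'
#' }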
#' @export
browse_images <- function(project, img_ids, which=c("resized", "original", "thumbnail"), max_images=20,
iteration=NULL)
{
if(length(img_ids) > max_images)
{
warning("Only the first ", max_images, " images displayed", call.=FALSE)
img_ids <- img_ids[seq_len(max_images)]
}
opts <- list(
imageIds=paste0(img_ids, collapse=","),
iterationId=iteration
)
res <- do_training_op(project, "images/id", options=opts, simplifyDataFrame=TRUE)
img_urls <- switch(match.arg(which),
resized=res$resizedImageUri,
original=res$originalImageUri,
thumbnail=res$thumbnailUri
)
lapply(img_urls, httr::BROWSE)
invisible(NULL)
}
# vectorised form of image_to_body, checks that all images are raw/filename/URL
images_to_bodies <- function(images)
{
type <- image_type(images)
if(type == "raw" && is.raw(images))
images <- list(images)
# returned list will be named
names(images) <- if(type == "raw") seq_along(images) else images
switch(type,
raw=mapply(
function(conts, name) list(name=name, contents=conts),
images,
names(images),
SIMPLIFY=FALSE
),
files=mapply(
function(f, size) list(name=f, contents=readBin(f, "raw", size)),
images,
file.size(images),
SIMPLIFY=FALSE
),
urls=lapply(images, function(f) list(url=f))
)
}
image_type <- function(images)
{
if(is.raw(images) || (is.list(images) && all(sapply(images, is.raw))))
return("raw")
if(all(file.exists(images) & !dir.exists(images)))
return("files")
if(all(sapply(images, is_any_uri)))
return("urls")
stop("All image inputs must be of the same type: filenames, URLs or raw vectors", call.=FALSE)
}
| /scratch/gouwar.j/cran-all/cranData/AzureVision/R/customvision_imgs.R |
#' Get predictions from a Custom Vision model
#'
#' @param object A Custom Vision object from which to get predictions. See 'Details' below.
#' @param images The images for which to get predictions.
#' @param type The type of prediction: either class membership (the default), the class probabilities, or a list containing all information returned by the prediction endpoint.
#' @param save_result For the predictive service methods, whether to store the predictions on the server for future use.
#' @param ... Further arguments passed to lower-level functions; not used.
#' @details
#' AzureVision defines prediction methods for both Custom Vision model training objects (of class `customvision_model`) and prediction services (`classification_service` and `object_detection_service`). The method for model training objects calls the "quick test" endpoint, and is meant only for testing purposes.
#'
#' The prediction endpoints accept a single image per request, so supplying multiple images to these functions will call the endpoints multiple times, in sequence. The images can be specified as:
#' - A vector of local filenames. All common image file formats are supported.
#' - A vector of publicly accessible URLs.
#' - A raw vector, or a list of raw vectors, holding the binary contents of the image files.
#' @seealso
#' [`train_model`], [`publish_model`], [`classification_service`], [`object_detection_service`]
#' @examples
#' \dontrun{
#'
#' # predicting with the training endpoint
#' endp <- customvision_training_endpoint(url="endpoint_url", key="key")
#' myproj <- get_project(endp, "myproject")
#' mod <- get_model(myproj)
#'
#' predict(mod, "testimage.jpg")
#' predict(mod, "https://mysite.example.com/testimage.jpg", type="prob")
#'
#' imgraw <- readBin("testimage.jpg", "raw", file.size("testimage.jpg"))
#' predict(mod, imgraw, type="list")
#'
#' # predicting with the prediction endpoint
#' # you'll need either the project object or the ID
#' proj_id <- myproj$project$id
#' pred_endp <- customvision_prediction_endpoint(url="endpoint_url", key="pred_key")
#' pred_svc <- classification_service(pred_endp, proj_id, "iteration1")
#' predict(pred_svc, "testimage.jpg")
#'
#' }
#' @aliases predict
#' @rdname customvision_predict
#' @export
predict.customvision_model <- function(object, images, type=c("class", "prob", "list"), ...)
{
type <- match.arg(type)
images <- images_to_bodies(images)
    files <- !is.null(images[[1]]$contents)
op <- file.path("quicktest", if(files) "image" else "url")
opts <- list(iterationId=object$id)
out <- if(files)
lapply(images, function(f)
            do_training_op(object$project, op, options=opts, body=f$contents, http_verb="POST", simplifyVector=TRUE))
else lapply(images, function(f)
do_training_op(object$project, op, options=opts, body=f, http_verb="POST", simplifyVector=TRUE))
normalize_predictions(out, type)
}
#' @rdname customvision_predict
#' @export
predict.classification_service <- function(object, images, type=c("class", "prob", "list"), save_result=FALSE, ...)
{
type <- match.arg(type)
customvision_predict_internal(object, images, type, save_result, verb="classify")
}
#' @rdname customvision_predict
#' @export
predict.object_detection_service <- function(object, images, type=c("class", "prob", "list"), save_result=FALSE, ...)
{
type <- match.arg(type)
customvision_predict_internal(object, images, type, save_result, verb="detect")
}
customvision_predict_internal <- function(object, images, type, save_result, verb)
{
images <- images_to_bodies(images)
    files <- !is.null(images[[1]]$contents)
op <- file.path(verb, "iterations", object$name, if(files) "image" else "url")
if(!save_result)
op <- file.path(op, "nostore")
out <- if(files)
lapply(images, function(f)
            do_prediction_op(object, op, body=f$contents, http_verb="POST", simplifyVector=TRUE))
else lapply(images, function(f)
do_prediction_op(object, op, body=f, http_verb="POST", simplifyVector=TRUE))
normalize_predictions(out, type)
}
#' Connect to a Custom Vision predictive service
#'
#' @param endpoint A prediction endpoint object, of class `customvision_prediction_endpoint`.
#' @param project The project underlying this predictive service. Can be either an object of class `customvision_project`, or a string giving the ID of the project.
#' @param name The published name of the service.
#' @details
#' These functions are handles to a predictive service that was previously published from a trained model. They have `predict` methods defined for them.
#' @return
#' An object of class `classification_service` or `object_detection_service`, as appropriate. These are subclasses of `customvision_predictive_service`.
#' @seealso
#' [`customvision_prediction_endpoint`], [`customvision_project`]
#'
#' [`predict.classification_service`], [`predict.object_detection_service`], [`do_prediction_op`]
#'
#' [`train_model`], [`publish_model`]
#' @examples
#' \dontrun{
#'
#' endp <- customvision_training_endpoint(url="endpoint_url", key="key")
#' myproj <- get_project(endp, "myproject")
#'
#' # getting the ID from the project object -- in practice you would store the ID separately
#' pred_endp <- customvision_prediction_endpoint(url="endpoint_url", key="pred_key")
#' classification_service(pred_endp, myproj$project$id, "publishedname")
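#'
#' # for an object detection project, connect with object_detection_service() in the same way
#' object_detection_service(pred_endp, myproj$project$id, "publishedname")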
#'
#' }
#' @aliases customvision_predictive_service
#' @rdname customvision_predictive_service
#' @export
classification_service <- function(endpoint, project, name)
{
if(inherits(project, "classification_project"))
project <- project$project$id
else if(!is_guid(project))
stop("Must supply a classification project object or ID", call.=FALSE)
structure(
list(endpoint=endpoint, project=project, name=name),
class=c("classification_service", "customvision_predictive_service")
)
}
#' @rdname customvision_predictive_service
#' @export
object_detection_service <- function(endpoint, project, name)
{
if(inherits(project, "object_detection_project"))
project <- project$project$id
else if(!is_guid(project))
stop("Must supply an object detection project object or ID", call.=FALSE)
structure(
list(endpoint=endpoint, project=project, name=name),
class=c("object_detection_service", "customvision_predictive_service")
)
}
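# reshape a list of per-image prediction responses into the requested output format:
# "list" = raw predictions, "prob" = matrix of tag probabilities, "class" = most probable tag per image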
normalize_predictions <- function(lst, type)
{
names(lst) <- NULL
lst <- lapply(lst, `[[`, "predictions")
if(type == "list")
lst
else if(type == "prob")
{
tagnames <- sort(lst[[1]]$tagName)
out <- t(sapply(lst, function(df) df$probability[order(df$tagName)]))
colnames(out) <- tagnames
out
}
else sapply(lst, function(df) df$tagName[which.max(df$probability)])
}
| /scratch/gouwar.j/cran-all/cranData/AzureVision/R/customvision_predict.R |
#' Publish, export and unpublish a Custom Vision model iteration
#'
#' @param model A Custom Vision model iteration object.
#' @param name For `publish_model`, the name to assign to the published model on the prediction endpoint.
#' @param prediction_resource For `publish_model`, the Custom Vision prediction resource to publish to. This can either be a string containing the Azure resource ID, or an AzureRMR resource object.
#' @param format For `export_model`, the format to export to. See below for supported formats.
#' @param destfile For `export_model`, the destination file for downloading. Set this to NULL to skip downloading.
#' @param confirm For `unpublish_model`, whether to ask for confirmation first.
#' @details
#' Publishing a model makes it available to clients as a predictive service. Exporting a model serialises it to a file of the given format in Azure storage, which can then be downloaded. Each iteration of the model can be published or exported separately.
#'
#' The `format` argument to `export_model` can be one of the following. Note that exporting a model requires that the project was created with support for it.
#' - `"onnx"`: ONNX 1.2
#' - `"coreml"`: CoreML, for iOS 11 devices
#' - `"tensorflow"`: TensorFlow
#' - `"tensorflow lite"`: TensorFlow Lite for Android devices
#' - `"linux docker"`, `"windows docker"`, `"arm docker"`: A Docker image for the given platform (Raspberry Pi 3 in the case of ARM)
#' - `"vaidk"`: Vision AI Development Kit
#' @return
#' `export_model` returns the URL of the exported file, invisibly if it was downloaded.
#'
#' `list_model_exports` returns a data frame detailing the formats the current model has been exported to, along with their download URLs.
#' @seealso
#' [`train_model`], [`get_model`], [`customvision_predictive_service`], [`predict.classification_service`], [`predict.object_detection_service`]
#' @examples
#' \dontrun{
#'
#' endp <- customvision_training_endpoint(url="endpoint_url", key="key")
#' myproj <- get_project(endp, "myproject")
#' mod <- get_model(myproj)
#'
#' export_model(mod, "tensorflow", download=FALSE)
#' export_model(mod, "onnx", destfile="onnx.zip")
#'
#' rg <- AzureRMR::get_azure_login("yourtenant")$
#' get_subscription("sub_id")$
#' get_resource_group("rgname")
#'
#' pred_res <- rg$get_cognitive_service("mycustvis_prediction")
#' publish_model(mod, "mypublishedmod", pred_res)
#'
#' unpublish_model(mod)
#'
#' }
#' @rdname customvision_publish
#' @export
publish_model <- function(model, name, prediction_resource)
{
if(is_resource(prediction_resource))
prediction_resource <- prediction_resource$id
op <- file.path("iterations", model$id, "publish")
options <- list(publishName=name, predictionId=prediction_resource)
do_training_op(model$project, op, options=options, http_verb="POST")
invisible(NULL)
}
#' @rdname customvision_publish
#' @export
unpublish_model <- function(model, confirm=TRUE)
{
if(!confirm_delete("Are you sure you want to unpublish the model?", confirm))
return(invisible(NULL))
op <- file.path("iterations", model$id, "publish")
do_training_op(model$project, op, http_verb="DELETE")
invisible(NULL)
}
#' @rdname customvision_publish
#' @export
export_model <- function(model, format, destfile=basename(httr::parse_url(dl_link)$path))
{
settings <- model$project$project$settings
if(!is_compact_domain(settings$domainId))
stop("Project was not created with support for exporting", call.=FALSE)
plat <- get_export_platform(format)
if(plat$platform == "VAIDK" && is_empty(settings$targetExportPlatforms))
stop("Project does not support exporting to Vision AI Dev Kit format", call.=FALSE)
# if already exported, don't export again
exports <- list_model_exports(model)
this_exp <- find_model_export(plat, exports)
if(is_empty(this_exp))
{
op <- file.path("iterations", model$id, "export")
res <- do_training_op(model$project, op, options=plat, http_verb="POST")
# wait for it to appear in the list of exports
for(i in 1:500)
{
exports <- list_model_exports(model)
this_exp <- find_model_export(plat, exports)
if(is_empty(this_exp))
stop("Exported model not found", call.=FALSE)
status <- exports$status[this_exp]
if(status %in% c("Done", "Failed"))
break
Sys.sleep(5)
}
if(status != "Done")
stop("Unable to export model", call.=FALSE)
}
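    # note: the default value of destfile refers to dl_link; lazy evaluation means it is only
    # computed when destfile is first used, after dl_link has been assigned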
dl_link <- exports$downloadUri[this_exp]
if(!is.null(destfile))
{
message("Downloading to ", destfile)
utils::download.file(dl_link, destfile)
invisible(dl_link)
}
else dl_link
}
#' @rdname customvision_publish
#' @export
list_model_exports <- function(model)
{
op <- file.path("iterations", model$id, "export")
do_training_op(model$project, op, simplifyVector=TRUE)
}
as_datetime <- function(x, format="%Y-%m-%dT%H:%M:%S", tz="UTC")
{
as.POSIXct(x, format=format, tz=tz)
}
make_model_iteration <- function(iteration, project)
{
structure(
list(project=project, id=iteration$id, name=iteration$name),
class="customvision_model"
)
}
get_export_platform <- function(format)
{
switch(tolower(format),
"coreml"=list(platform="CoreML"),
"arm docker"=list(platform="DockerFile", flavor="ARM"),
"linux docker"=list(platform="DockerFile", flavor="Linux"),
"windows docker"=list(platform="DockerFile", flavor="Windows"),
"onnx"=list(platform="ONNX"),
"tensorflow"=list(platform="TensorFlow", flavor="TensorFlowNormal"),
"tensorflow lite"=list(platform="TensorFlow", flavor="TensorFlowLite"),
"vaidk"=list(platform="VAIDK"),
stop("Unrecognised export format '", format, "'", call.=FALSE)
)
}
find_model_export <- function(platform, exports)
{
this_plat <- exports$platform == platform$platform
this_flav <- if(!is.null(platform$flavor))
exports$flavor == platform$flavor
else TRUE
which(this_plat & this_flav)
}
| /scratch/gouwar.j/cran-all/cranData/AzureVision/R/customvision_publish.R |
#' Add and remove regions from images
#'
#' @param project A Custom Vision object detection project.
#' @param image_ids For `add_image_regions` and `remove_image_regions`, the IDs of the images for which to add or remove regions.
#' @param image For `identify_regions`, an image for which to identify possible regions in which an object exists. This can be the ID of an image that was previously uploaded to the project; if not, the image is uploaded. Otherwise, see `add_images` for how to specify an image to upload.
#' @param regions For `add_image_regions`, the regions to add. See 'Details' below.
#' @param region_ids For `remove_image_regions`, a vector of region IDs. This is an alternative to image ID for specifying the regions to remove; if this is provided, `image_ids` is not used.
#' @details
#' `add_image_regions` and `remove_image_regions` let you specify the regions in an image that contain an object. You can use `identify_regions` to have Custom Vision try to guess the regions for an image.
#'
#' The regions to add should be specified as a list of data frames, with one data frame per image. Each data frame should have one row per region, and the following columns:
#' - `left`, `top`, `width`, `height`: the location and dimensions of the region bounding box, normalised to be between 0 and 1.
#' - `tag`: the name of the tag to associate with the region.
#' Any other columns in the data frame will be ignored.
#'
#' @return
#' For `add_image_regions`, a data frame containing the details on the added regions.
#'
#' For `remove_image_regions`, the value of `image_ids` invisibly, if this argument was provided; NULL otherwise.
#'
#' For `identify_regions`, a list with the following components: `projectId`, the ID of the project; `imageId`, the ID of the image; and `proposals`, a data frame containing the coordinates of each identified region along with a confidence score.
#' @seealso
#' [`add_images`], [`add_tags`]
#'
#' [`add_image_tags`] for classification projects
#' @examples
#' \dontrun{
#'
#' img_ids <- add_images(myproj, c("catanddog.jpg", "cat.jpg", "dog.jpg"))
#'
#' regions <- list(
#' data.frame(
#' tag=c("cat", "dog"),
#' left=c(0.1, 0.5),
#' top=c(0.25, 0.28),
#' width=c(0.24, 0.21),
#' height=c(0.7, 0.6)
#' ),
#' data.frame(
#' tag="cat", left=0.5, top=0.35, width=0.25, height=0.62
#' ),
#' data.frame(
#' tag="dog", left=0.07, top=0.12, width=0.79, height=0.5
#' )
#' )
#'
#' add_image_regions(myproj, img_ids, regions)
#' remove_image_regions(myproj, img_ids[3])
#' add_image_regions(myproj, img_ids[3],
#' list(data.frame(
#' tag="dog", left=0.5, top=0.12, width=0.4, height=0.7
#' ))
#' )
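#'
#' # ask the service to propose candidate regions for an image (illustrative)
#' identify_regions(myproj, img_ids[1])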
#'
#' }
#' @aliases customvision_regions
#' @rdname customvision_regions
#' @export
add_image_regions <- function(project, image_ids, regions)
{
UseMethod("add_image_regions")
}
add_image_regions.object_detection_project <- function(project, image_ids, regions)
{
if(!all(sapply(image_ids, is_guid)))
stop("Must provide GUIDs of images to add regions to", call.=FALSE)
tagdf <- list_tags(project, as="dataframe")
region_tags <- unique_tags(lapply(regions, `[[`, "tag"))
if(!all(region_tags %in% tagdf$name))
        tagdf <- rbind(tagdf[c("name", "id")], add_tags(project, setdiff(region_tags, tagdf$name)))
tag_ids <- tagdf$id
names(tag_ids) <- tagdf$name
regions <- mapply(
function(region_df, img)
{
region_df$tagId <- tag_ids[region_df$tag]
region_df$imageId <- img
region_df[c("imageId", "tagId", "left", "top", "width", "height")]
},
regions, image_ids, SIMPLIFY=FALSE
)
regions <- do.call(rbind, regions)
lst <- list()
while(nrow(regions) > 0)
{
idx <- seq_len(min(nrow(regions), 64))
body <- list(regions=regions[idx, ])
res <- do_training_op(project, "images/regions", body=body, http_verb="POST", simplifyDataFrame=TRUE)$created
lst <- c(lst, list(res))
regions <- regions[-idx, ]
}
do.call(rbind, lst)
}
#' @rdname customvision_regions
#' @export
remove_image_regions <- function(project, image_ids, region_ids=NULL)
{
if(!missing(image_ids) && !all(sapply(image_ids, is_guid)))
stop("Must provide GUIDs of images to remove regions from", call.=FALSE)
if(is_empty(region_ids))
{
region_dflst <- subset(list_images(project, "tagged", as="dataframe"), id %in% image_ids)$regions
region_ids <- do.call(rbind, region_dflst)$regionId
}
while(!is_empty(region_ids))
{
idx <- seq_len(min(length(region_ids), 64))
opts <- list(regionIds=paste0(region_ids[idx], collapse=","))
do_training_op(project, "images/regions", options=opts, http_verb="DELETE")
region_ids <- region_ids[-idx]
}
if(missing(image_ids))
invisible(NULL)
else invisible(image_ids)
}
#' @rdname customvision_regions
#' @export
identify_regions <- function(project, image)
{
image_id <- if(is.character(image) && is_guid(image))
image
else add_images(project, image)
res <- do_training_op(project, file.path("images", image_id, "regionproposals"), http_verb="POST",
simplifyDataFrame=TRUE, flatten=TRUE)
names(res$proposals)[-1] <- c("left", "top", "width", "height")
res
}
| /scratch/gouwar.j/cran-all/cranData/AzureVision/R/customvision_regions.R |
#' Add, retrieve and remove tags for a project
#'
#' @param project A Custom Vision project.
#' @param tags For `add_tags`, a vector of strings to treat as tags.
#' @param name,id For `get_tag`, the name (text string) for a tag, and its ID. Provide one or the other, but not both.
#' @param negative_name For `add_negative_tag`, the label to provide a negative tag. See 'Negative tags' below.
#' @param as For `list_tags`, the format in which to return results: a vector of tag names, a vector of tag IDs, a data frame of metadata, or a list of metadata.
#' @param iteration For `list_tags` and `get_tag`, the iteration ID (roughly, which model generation to use). Defaults to the latest iteration.
#' @param confirm For `remove_tags`, whether to ask for confirmation first.
#' @details
#' _Tags_ are the labels attached to images for use in classification projects. An image can have one or multiple tags associated with it; however, the latter only makes sense if the project is setup for multi-label classification.
#'
#' Tags form part of the metadata for a Custom Vision project, and have to be explicitly defined prior to use. Each tag has a corresponding ID which is used to manage it. In general, you can let AzureVision handle the details of managing tags and tag IDs.
#'
#' @section Negative tags:
#' A _negative tag_ is a special tag that represents the absence of any other tag. For example, if a project is classifying images into cats and dogs, an image that doesn't contain either a cat or dog should be given a negative tag. This can be distinguished from an _untagged_ image, where there is no information at all on what it contains.
#'
#' You can add a negative tag to a project with the `add_negative_tag` method. Once defined, a negative tag is treated like any other tag. A project can only have one negative tag defined.
#' @return
#' `add_tags` and `add_negative_tag` return a data frame containing the names and IDs of the tags added.
#' @seealso
#' [`add_image_tags`], [`remove_image_tags`]
#' @examples
#' \dontrun{
#'
#' add_tags(myproj, "newtag")
#' add_negative_tag(myproj)
#' remove_tags(myproj, "_negative_")
#' add_negative_tag(myproj, "nothing")
#'
#' }
#' @rdname customvision_tags
#' @aliases customvision_tags
#' @export
add_tags <- function(project, tags)
{
current_tags <- list_tags(project, as="names")
tags <- unique_tags(tags)
newtags <- setdiff(tags, current_tags)
if(is_empty(newtags))
return(NULL)
res <- lapply(newtags, function(tag)
do_training_op(project, "tags", options=list(name=tag), http_verb="POST"))
do.call(rbind.data.frame, c(stringsAsFactors=FALSE,
lapply(res, function(x) x[c("name", "id")]))
)
}
#' @rdname customvision_tags
#' @export
add_negative_tag <- function(project, negative_name="_negative_")
{
taglist <- list_tags(project, as="dataframe")
if(any(taglist$type == "Negative"))
{
warning("Project already has a negative tag", call.=FALSE)
return(NULL)
}
res <- if(negative_name %in% taglist$name)
{
tagid <- taglist$id[which(negative_name == taglist$name)]
do_training_op(project, file.path("tags", tagid), body=list(name=negative_name, type="Negative"), http_verb="PATCH")
}
else do_training_op(project, "tags", options=list(name=negative_name, type="Negative"), http_verb="POST")
data.frame(res[c("name", "id")], stringsAsFactors=FALSE)
}
#' @rdname customvision_tags
#' @export
list_tags <- function(project, as=c("names", "ids", "dataframe", "list"), iteration=NULL)
{
as <- match.arg(as)
if(as == "list")
return(do_training_op(project, "tags", options(iterationId=iteration)))
tags <- do_training_op(project, "tags", options(iterationId=iteration), simplifyVector=TRUE)
if(as == "names")
tags$name
else if(as == "ids")
tags$id
else as.data.frame(tags)
}
#' @rdname customvision_tags
#' @export
get_tag <- function(project, name=NULL, id=NULL, iteration=NULL)
{
if(is.null(id))
{
taglist <- list_tags(project, iteration=iteration, as="list")
tagnames <- sapply(taglist, `[[`, "name")
i <- which(name == tagnames)
if(is_empty(i))
stop(sprintf("Image tag '%s' not found", name), call.=FALSE)
taglist[[i]]
}
else do_training_op(project, file.path("tags", id), options(iterationid=iteration))
}
#' @rdname customvision_tags
#' @export
remove_tags <- function(project, tags, confirm=TRUE)
{
tags <- unique_tags(tags)
if(!confirm_delete("Are you sure you want to remove tags from the project?", confirm))
return(invisible(project))
lapply(get_tag_ids_from_names(tags, project), function(tag)
do_training_op(project, file.path("tags", tag), http_verb="DELETE"))
invisible(NULL)
}
#' Tag and untag images uploaded to a project
#'
#' @param project a Custom Vision classification project.
#' @param image_ids The IDs of the images to tag or untag.
#' @param tags For `add_image_tags`, the tag labels to add to the images. For `remove_image_tags`, the tags (either text labels or IDs) to remove from images. The default for untagging is to remove all assigned tags.
#' @details
#' `add_image_tags` is for tagging images that were uploaded previously, while `remove_image_tags` untags them. Adding tags does not remove previously assigned ones. Similarly, removing one tag from an image leaves any other tags intact.
#'
#' Tags can be specified in the following ways:
#' - For a regular classification project (one tag per image), as a vector of strings. The tags will be applied to the images in order. If the length of the vector is 1, it will be recycled to the length of `image_ids`.
#' - For a multilabel classification project (multiple tags per image), as a _list_ of vectors of strings. Each vector in the list contains the tags to be assigned to the corresponding image. If the length of the list is 1, it will be recycled to the length of `image_ids`.
#'
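#' As a brief sketch (assuming `myproj` is a project object, `ids` is a vector of three image IDs, and the tag names are arbitrary), the two forms look like this:
#' ```
#' # multiclass: one tag per image
#' add_image_tags(myproj, ids, tags=c("can", "carton", "can"))
#'
#' # multilabel: a list of tag vectors, one per image
#' add_image_tags(myproj, ids, tags=list("can", c("can", "carton"), "carton"))
#' ```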
#' @return
#' The vector of IDs for the images affected, invisibly.
#' @seealso
#' [`add_images`], [`add_tags`]
#'
#' [`add_image_regions`] for object detection projects
#' @examples
#' \dontrun{
#'
#' imgs <- dir("path/to/images", full.names=TRUE)
#' img_ids <- add_images(myproj, imgs)
#' add_image_tags(myproj, img_ids, "mytag")
#' remove_image_tags(myproj, img_ids[1])
#' add_image_tags(myproj, img_ids[1], "myothertag")
#'
#' }
#' @aliases customvision_image_tags
#' @rdname customvision_image_tags
#' @export
add_image_tags <- function(project, image_ids, tags)
{
UseMethod("add_image_tags")
}
#' @rdname customvision_image_tags
#' @export
add_image_tags.classification_project <- function(project, image_ids=list_images(project, "untagged"), tags)
{
if(length(tags) != length(image_ids) && length(tags) != 1)
stop("Must supply tags for each image", call.=FALSE)
if(!all(sapply(image_ids, is_guid)))
stop("Must provide GUIDs of images to be tagged", call.=FALSE)
unique_tags <- unique_tags(tags)
add_tags(project, unique_tags)
tag_ids <- get_tag_ids_from_names(unique_tags, project)
if(length(tags) == 1)
tags <- rep(tags, length(image_ids))
req_list <- lapply(seq_along(unique_tags), function(i)
{
this_set <- sapply(tags, function(tags_j) unique_tags[i] %in% tags_j)
if(!is_empty(this_set))
data.frame(imageId=image_ids[this_set], tagId=tag_ids[i], stringsAsFactors=FALSE)
else NULL
})
# do by pages, to be safe
while(!is_empty(req_list))
{
idx <- seq_len(min(length(req_list), 64))
do_training_op(project, "images/tags", body=list(tags=do.call(rbind, req_list[idx])), http_verb="POST")
req_list <- req_list[-idx]
}
invisible(image_ids)
}
#' @rdname customvision_image_tags
#' @export
remove_image_tags <- function(project, image_ids=list_images(project, "tagged", as="ids"),
tags=list_tags(project, as="ids"))
{
if(!all(sapply(image_ids, is_guid)))
stop("Must provide GUIDs of images to be untagged", call.=FALSE)
if(!all(sapply(tags, is_guid)))
{
tagdf <- list_tags(project, as="dataframe")[c("name", "id")]
tags <- tagdf$id[match(tags, tagdf$name)]
}
tmp_imgs <- image_ids
while(!is_empty(tmp_imgs))
{
idx <- seq_len(min(length(tmp_imgs), 64))
opts <- list(
imageIds=paste0(tmp_imgs[idx], collapse=","),
tagIds=paste0(tags, collapse=",")
)
do_training_op(project, "images/tags", options=opts, http_verb="DELETE")
tmp_imgs <- tmp_imgs[-idx]
}
invisible(image_ids)
}
get_tag_ids_from_names <- function(tagnames, project)
{
tagdf <- list_tags(project, as="dataframe")
unname(structure(tagdf$id, names=tagdf$name)[tagnames])
}
unique_tags <- function(tags)
{
if(is.list(tags)) unique(unlist(tags)) else unique(tags)
}
| /scratch/gouwar.j/cran-all/cranData/AzureVision/R/customvision_tags.R |
#' @export
print.customvision_model <- function(x, ...)
{
cat("Azure Custom Vision model\n")
cat(" Project/iteration: ", x$project$project$name, "/", x$name, " (", x$id, ")", "\n", sep="")
invisible(x)
}
#' Create, retrieve, rename and delete a model iteration
#'
#' @param project A Custom Vision project.
#' @param model A Custom Vision model.
#' @param object For the `delete_model` method, a Custom Vision project or model, as appropriate.
#' @param training_method The training method to use. The default "quick" is faster but may be less accurate. The "advanced" method is slower but produces better results.
#' @param max_time For advanced training, the maximum training time in hours.
#' @param force For advanced training, whether to refit the model even if the data has not changed since the last iteration.
#' @param email For advanced training, an email address to notify when the training is complete.
#' @param wait Whether to wait until training is complete (or the maximum training time has elapsed) before returning.
#' @param iteration For `get_model` and `delete_model.customvision_project`, either the iteration name or ID.
#' @param name For `rename_model`, the new name for the model.
#' @param as For `list_models`, the format in which to return results: as a named vector of model iteration IDs, or a list of model objects.
#' @param confirm For the `delete_model` methods, whether to ask for confirmation first.
#' @param ... Arguments passed to lower-level functions.
#' @details
#' Training a Custom Vision model results in a _model iteration_. Each iteration is based on the current set of images uploaded to the endpoint. Successive model iterations trained on different image sets do not overwrite previous ones.
#'
#' You must have at least 5 images per tag for a classification project, and 15 images per tag for an object detection project, before you can train a model.
#'
#' By default, AzureVision will use the latest model iteration for actions such as prediction, showing performance statistics, and so on. You can list the model iterations with `list_models`, and retrieve a specific iteration by passing the iteration ID to `get_model`.
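#'
#' For example, here is a minimal sketch of retrieving an older iteration (assuming `myproj` is a project with at least two trained iterations):
#' ```
#' mods <- list_models(myproj)  # named vector of iteration IDs, most recent first
#' get_model(myproj, iteration=mods[2])
#' ```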
#' @return
#' For `train_model`, `get_model` and `rename_model`, an object of class `customvision_model` which is a handle to the iteration.
#'
#' For `list_models`, based on the `as` argument: `as="ids"` returns a named vector of model iteration IDs, while `as="list"` returns a list of model objects.
#' @seealso
#' [`show_model`], [`show_training_performance`], [`publish_model`]
#' @examples
#' \dontrun{
#'
#' endp <- customvision_training_endpoint(url="endpoint_url", key="key")
#' myproj <- get_project(endp, "myproject")
#'
#' train_model(myproj)
#' train_model(myproj, training_method="advanced", force=TRUE, email="[email protected]")
#'
#' list_models(myproj)
#'
#' mod <- get_model(myproj)
#' rename_model(mod, "mymodel")
#' mod <- get_model(myproj, "mymodel")
#'
#' delete_model(mod)
#'
#' }
#' @rdname customvision_train
#' @export
train_model <- function(project, training_method=c("quick", "advanced"), max_time=1, force=FALSE, email=NULL,
wait=(training_method == "quick"))
{
training_method <- match.arg(training_method)
opts <- if(training_method == "advanced")
list(
trainingType="advanced",
reservedBudgetInHours=max_time,
forceTrain=force,
notificationEmailAddress=email
)
else list()
res <- do_training_op(project, "train", options=opts, http_verb="POST")
if(wait)
{
message("Waiting for training to complete")
interval <- 10
for(i in 1:(max_time * (3600/interval)))
{
message(".", appendLF=FALSE)
Sys.sleep(interval)
res <- do_training_op(project, file.path("iterations", res$id))
if(res$status != "Training")
break
}
message("\n")
if(!(res$status %in% c("Training", "Completed")))
stop("Unable to train model, final status '", res$status, "'", call.=FALSE)
if(res$status == "Training")
warning("Training not yet completed")
}
make_model_iteration(res, project)
}
#' @rdname customvision_train
#' @export
list_models <- function(project, as=c("ids", "list"))
{
as <- match.arg(as)
res <- do_training_op(project, "iterations")
times <- sapply(res, `[[`, "lastModified")
names(res) <- sapply(res, `[[`, "name")
if(as == "ids")
sapply(res[order(times, decreasing=TRUE)], `[[`, "id")
else lapply(res[order(times, decreasing=TRUE)], make_model_iteration, project=project)
}
#' @rdname customvision_train
#' @export
get_model <- function(project, iteration=NULL)
{
iteration <- find_model_iteration(iteration, project)
res <- do_training_op(project, file.path("iterations", iteration))
make_model_iteration(res, project)
}
#' @rdname customvision_train
#' @export
rename_model <- function(model, name, ...)
{
res <- do_training_op(model$project, file.path("iterations", model$id), body=list(name=name), http_verb="PATCH")
make_model_iteration(res, model$project)
}
#' @rdname customvision_train
#' @export
delete_model <- function(object, ...)
{
UseMethod("delete_model")
}
#' @rdname customvision_train
#' @export
delete_model.customvision_project <- function(object, iteration=NULL, confirm=TRUE, ...)
{
if(!confirm_delete("Are you sure you want to delete this model iteration?", confirm))
return(invisible(NULL))
iteration <- find_model_iteration(iteration, object)
do_training_op(object, file.path("iterations", iteration), http_verb="DELETE")
invisible(NULL)
}
#' @rdname customvision_train
#' @export
delete_model.customvision_model <- function(object, confirm=TRUE, ...)
{
if(!confirm_delete("Are you sure you want to delete this model iteration?", confirm))
return(invisible(NULL))
do_training_op(object$project, file.path("iterations", object$id), http_verb="DELETE")
invisible(NULL)
}
#' Display model iteration details
#'
#' @param model,object A Custom Vision model iteration object.
#' @param threshold For a classification model, the probability threshold to assign an image to a class.
#' @param overlap For an object detection model, the overlap threshold for distinguishing between overlapping objects.
#' @param ... Arguments passed to lower-level functions.
#' @details
#' `show_model` displays the metadata for a model iteration: the name (assigned by default), model training status, publishing details, and so on. `show_training_performance` displays summary statistics for the model's performance on the training data. The `summary` method for Custom Vision model objects simply calls `show_training_performance`.
#'
#' @return
#' For `show_model`, a list containing the metadata for the model iteration. For `show_training_performance` and `summary.customvision_model`, a list of performance diagnostics.
#' @seealso
#' [`train_model`]
#' @examples
#' \dontrun{
#'
#' endp <- customvision_training_endpoint(url="endpoint_url", key="key")
#' myproj <- get_project(endp, "myproject")
#' mod <- get_model(myproj)
#'
#' show_model(mod)
#'
#' show_training_performance(mod)
#' summary(mod)
#'
#' }
#' @rdname customvision_train_result
#' @export
show_model <- function(model)
{
res <- do_training_op(model$project, file.path("iterations", model$id),
simplifyVector=TRUE, simplifyDataFrame=FALSE)
res$created <- as_datetime(res$created)
res$lastModified <- as_datetime(res$lastModified)
res$trainedAt <- as_datetime(res$trainedAt)
res
}
#' @rdname customvision_train_result
#' @export
show_training_performance <- function(model, threshold=0.5, overlap=NULL)
{
op <- file.path("iterations", model$id, "performance")
do_training_op(model$project, op, options=list(threshold=threshold, overlapThreshold=overlap),
simplifyVector=TRUE)
}
#' @rdname customvision_train_result
#' @export
summary.customvision_model <- function(object, ...)
{
show_training_performance(object, ...)
}
find_model_iteration <- function(iteration=NULL, project)
{
iters <- list_models(project)
if(is.null(iteration))
return(iters[1])
if(is_guid(iteration))
{
if(!(iteration %in% iters))
stop("Invalid model iteration ID", call.=FALSE)
return(iteration)
}
else
{
if(!(iteration %in% names(iters)))
stop("Invalid model iteration name", call.=FALSE)
return(iters[iteration])
}
}
| /scratch/gouwar.j/cran-all/cranData/AzureVision/R/customvision_training.R |
#' Endpoint objects for computer vision services
#'
#' @param url The URL of the endpoint.
#' @param key A subscription key. Can be single-service or multi-service.
#' @param aad_token For the Computer Vision endpoint, an OAuth token object, of class [`AzureAuth::AzureToken`]. You can supply this as an alternative to a subscription key.
#' @param ... Other arguments to pass to [`AzureCognitive::cognitive_endpoint`].
#' @details
#' These are functions to create service-specific endpoint objects. Computer Vision supports authentication via either a subscription key or Azure Active Directory (AAD) token; Custom Vision only supports subscription key. Note that there are _two_ kinds of Custom Vision endpoint, one for training and the other for prediction.
#' @return
#' An object inheriting from `cognitive_endpoint`. The subclass indicates the type of service/endpoint: Computer Vision, Custom Vision training, or Custom Vision prediction.
#' @seealso
#' [`cognitive_endpoint`], [`call_cognitive_endpoint`]
#' @rdname endpoint
#' @examples
#'
#' computervision_endpoint("https://myaccount.cognitiveservices.azure.com", key="key")
#'
#' customvision_training_endpoint("https://westus.api.cognitive.microsoft.com", key="key")
#'
#' customvision_prediction_endpoint("https://westus.api.cognitive.microsoft.com", key="key")
#'
#' @export
computervision_endpoint <- function(url, key=NULL, aad_token=NULL, ...)
{
endp <- cognitive_endpoint(url, service_type="ComputerVision", key=key, aad_token=aad_token, ...)
endp$url$path <- file.path("vision", getOption("azure_computervision_api_version"))
endp
}
#' @rdname endpoint
#' @export
customvision_training_endpoint <- function(url, key=NULL, ...)
{
endp <- cognitive_endpoint(url, service_type="CustomVision.Training", key=key, ..., auth_header="training-key")
endp$url$path <- file.path("customvision", getOption("azure_customvision_training_api_version"))
endp
}
#' @rdname endpoint
#' @export
customvision_prediction_endpoint <- function(url, key=NULL, ...)
{
endp <- cognitive_endpoint(url, service_type="CustomVision.Prediction", key=key, ..., auth_header="prediction-key")
endp$url$path <- file.path("customvision", getOption("azure_customvision_prediction_api_version"))
endp
}
#' Carry out a Custom Vision operation
#'
#' @param project For `do_training_op`, a Custom Vision project.
#' @param service For `do_prediction_op`, a Custom Vision predictive service.
#' @param op,... The operation to perform, and further arguments to be passed to `call_cognitive_endpoint`, and ultimately to the REST API.
#' @details
#' These functions provide low-level access to the Custom Vision REST API. `do_training_op` is for working with the training endpoint, and `do_prediction_op` with the prediction endpoint. You can use them if the other tools in this package don't provide what you need.
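#'
#' As a small sketch (assuming `myproj` is a Custom Vision project object), the following calls the REST API directly to list the raw iteration metadata for the project, which is what `list_models` uses internally:
#' ```
#' do_training_op(myproj, "iterations")
#' ```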
#' @seealso
#' [`customvision_training_endpoint`], [`customvision_prediction_endpoint`],
#' [`customvision_project`], [`customvision_predictive_service`], [`call_cognitive_endpoint`]
#' @rdname do_customvision_op
#' @export
do_training_op <- function(project, ...)
{
UseMethod("do_training_op")
}
#' @rdname do_customvision_op
#' @export
do_training_op.customvision_project <- function(project, op, ...)
{
op <- file.path("training/projects", project$project$id, op)
call_cognitive_endpoint(project$endpoint, op, ...)
}
#' @rdname do_customvision_op
#' @export
do_prediction_op <- function(service, ...)
{
UseMethod("do_prediction_op")
}
#' @rdname do_customvision_op
#' @export
do_prediction_op.customvision_predictive_service <- function(service, op, ...)
{
op <- file.path("prediction", service$project, op)
call_cognitive_endpoint(service$endpoint, op, ...)
}
is_any_uri <- function(string)
{
uri <- httr::parse_url(string)
!is.null(uri$scheme) && !is.null(uri$hostname)
}
| /scratch/gouwar.j/cran-all/cranData/AzureVision/R/endpoint.R |
#' Interface to Azure Computer Vision API
#'
#' @param endpoint A computer vision endpoint.
#' @param image An image to be sent to the endpoint. This can be either a filename, a publicly accessible URL, or a raw vector holding the file contents.
#' @param domain For `analyze`, an optional domain-specific model to use to analyze the image. Can be "celebrities" or "landmarks".
#' @param feature_types For `analyze`, an optional character vector of more detailed features to return. This can be one or more of: "categories", "tags", "description", "faces", "imagetype", "color", "adult", "brands" and "objects". If not supplied, defaults to "categories".
#' @param language A 2-character code indicating the language to use for tags, feature labels and descriptions. The default is `en`, for English.
#' @param detect_orientation For `read_text`, whether to automatically determine the image's orientation.
#' @param width,height For `make_thumbnail`, the dimensions for the returned thumbnail.
#' @param smart_crop For `make_thumbnail`, whether to automatically determine the best location to crop for the thumbnail. Useful when the aspect ratios of the original image and the thumbnail don't match.
#' @param outfile For `make_thumbnail`, the filename for the generated thumbnail. Alternatively, if this is NULL the thumbnail is returned as a raw vector.
#' @param ... Arguments passed to lower-level functions, and ultimately to `call_cognitive_endpoint`.
#' @details
#' `analyze` extracts visual features from the image. To obtain more detailed features, specify the `domain` and/or `feature_types` arguments as appropriate.
#'
#' `describe` attempts to provide a text description of the image.
#'
#' `detect_objects` detects objects in the image.
#'
#' `area_of_interest` attempts to find the "interesting" part of an image, meaning the most likely location of the image's subject.
#'
#' `tag` returns a set of words that are relevant to the content of the image. Not to be confused with the [`add_tags`] or [`add_image_tags`] functions that are part of the Custom Vision API.
#'
#' `categorize` attempts to place the image into a list of predefined categories.
#'
#' `read_text` performs optical character recognition (OCR) on the image.
#'
#' `list_computervision_domains` returns the predefined domain-specific models that can be queried by `analyze` for deeper analysis. Not to be confused with the domains available for training models with the Custom Vision API.
#'
#' `make_thumbnail` generates a thumbnail of the image, with the specified dimensions.
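#'
#' As a minimal sketch (assuming `vis` is a Computer Vision endpoint object and `image.jpg` is a local file):
#' ```
#' make_thumbnail(vis, "image.jpg", outfile="thumb.jpg", width=100, height=100)
#' ```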
#' @return
#' `analyze` returns a list containing the results of the analysis. The components will vary depending on the domain and feature types requested.
#'
#' `describe` returns a list with two components: `tags`, a vector of text labels; and `captions`, a data frame of descriptive sentences.
#'
#' `detect_objects` returns a dataframe giving the locations and types of the detected objects.
#'
#' `area_of_interest` returns a length-4 numeric vector, containing the top-left coordinates of the area of interest and its width and height.
#'
#' `tag` and `categorize` return a data frame of tag and category information, respectively.
#'
#' `read_text` returns the extracted text as a list with one component per region that contains text. Each component is a vector of character strings.
#'
#' `list_computervision_domains` returns a character vector of domain names.
#'
#' `make_thumbnail` returns a raw vector holding the contents of the thumbnail, if the `outfile` argument is NULL. Otherwise, the thumbnail is saved into `outfile`.
#'
#' @seealso
#' [`computervision_endpoint`], [`AzureCognitive::call_cognitive_endpoint`]
#'
#' [Computer Vision documentation](https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/Home)
#' @examples
#' \dontrun{
#'
#' vis <- computervision_endpoint(
#' url="https://accountname.cognitiveservices.azure.com/",
#' key="account_key"
#' )
#'
#' list_computervision_domains(vis)
#'
#' # analyze a local file
#' analyze(vis, "image.jpg")
#' # picture on the Internet
#' analyze(vis, "https://example.com/image.jpg")
#' # as a raw vector
#' analyze(vis, readBin("image.jpg", "raw", file.size("image.jpg")))
#'
#' # analyze has optional extras
#' analyze(vis, "image.jpg", feature_types=c("faces", "objects"))
#'
#' describe(vis, "image.jpg")
#' detect_objects(vis, "image.jpg")
#' area_of_interest(vis, "image.jpg")
#' tag(vis, "image.jpg") # more reliable than analyze(*, feature_types="tags")
#' categorize(vis, "image.jpg")
#' read_text(vis, "scanned_text.jpg")
#'
#' }
#' @aliases computervision
#' @rdname computervision
#' @export
analyze <- function(endpoint, image, domain=NULL, feature_types=NULL, language="en", ...)
{
body <- image_to_body(image)
if(!is_empty(feature_types))
feature_types <- paste0(feature_types, collapse=",")
options <- list(details=domain, language=language, visualFeatures=feature_types)
res <- call_cognitive_endpoint(endpoint, "analyze", body=body, options=options, ..., http_verb="POST",
simplifyVector=TRUE)
res[!(names(res) %in% c("requestId", "metadata"))]
}
#' @rdname computervision
#' @export
describe <- function(endpoint, image, language="en", ...)
{
body <- image_to_body(image)
options <- list(language=language)
res <- call_cognitive_endpoint(endpoint, "describe", body=body, options=options, ..., http_verb="POST",
simplifyVector=TRUE)
res$description
}
#' @rdname computervision
#' @export
detect_objects <- function(endpoint, image, ...)
{
body <- image_to_body(image)
res <- call_cognitive_endpoint(endpoint, "detect", body=body, ..., http_verb="POST", simplifyVector=TRUE)
res$objects
}
#' @rdname computervision
#' @export
area_of_interest <- function(endpoint, image, ...)
{
body <- image_to_body(image)
res <- call_cognitive_endpoint(endpoint, "areaOfInterest", body=body, ..., http_verb="POST", simplifyVector=TRUE)
unlist(res$areaOfInterest)
}
#' @rdname computervision
#' @export
tag <- function(endpoint, image, language="en", ...)
{
body <- image_to_body(image)
res <- call_cognitive_endpoint(endpoint, "tag", body=body, options=list(language=language), ..., http_verb="POST")
# hint not always present, so need to construct data frame manually
do.call(rbind.data.frame, c(lapply(res$tags, function(x)
{
if(is.null(x$hint))
x$hint <- NA_character_
x
}), stringsAsFactors=FALSE, make.row.names=FALSE))
}
#' @rdname computervision
#' @export
categorize <- function(endpoint, image, ...)
{
body <- image_to_body(image)
res <- call_cognitive_endpoint(endpoint, "analyze", body=body, options=list(visualFeatures="categories"), ...,
http_verb="POST", simplifyVector=TRUE)
res$categories
}
#' @rdname computervision
#' @export
read_text <- function(endpoint, image, detect_orientation=TRUE, language="en", ...)
{
body <- image_to_body(image)
res <- call_cognitive_endpoint(endpoint, "ocr", body=body,
options=list(detectOrientation=detect_orientation, language=language), ...,
http_verb="POST")
lapply(res$regions, function(region)
{
sapply(region$lines, function(line)
{
w <- sapply(line$words, `[[`, "text")
paste(w, collapse=" ")
})
})
}
#' @rdname computervision
#' @export
list_computervision_domains <- function(endpoint, ...)
{
res <- call_cognitive_endpoint(endpoint, "models", ..., simplifyVector=TRUE)
res$models$name
}
#' @rdname computervision
#' @export
make_thumbnail <- function(endpoint, image, outfile, width=50, height=50, smart_crop=TRUE, ...)
{
body <- image_to_body(image)
res <- call_cognitive_endpoint(endpoint, "generateThumbnail", body=body,
options=list(width=width, height=height, smartCropping=smart_crop), ...,
http_verb="POST")
if(!is.null(outfile))
writeBin(res, outfile)
else res
}
image_to_body <- function(image)
{
if(is.raw(image))
image
else if(file.exists(image))
readBin(image, "raw", file.size(image))
else if(is_any_uri(image))
list(url=image)
else stop("Could not find image", call.=FALSE)
}
| /scratch/gouwar.j/cran-all/cranData/AzureVision/R/vision.R |
---
title: "Using the Computer Vision service"
author: Hong Ooi
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Computer Vision}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{utf8}
---
The Computer Vision service provides developers with access to advanced algorithms that process images and return information, depending on the visual features you're interested in. For example, Computer Vision can determine if an image contains adult content, or it can find all of the human faces in an image.
## Creating the resources
You can create a Computer Vision resource using the AzureRMR framework for interacting with Resource Manager. The available service tiers are `F0` (free, limited to 20 API calls per minute and 5k calls per month) and `S1` (up to 10 calls per second).
```r
library(AzureVision)
rg <- AzureRMR::get_azure_login("yourtenant")$
get_subscription("sub_id")$
get_resource_group("rgname")
res <- rg$create_cognitive_service("myvis",
service_type="ComputerVision", service_tier="S1")
```
## Client interface
To communicate with the Computer Vision service, call the `computervision_endpoint` function with the service URL and key. Rather than a key, you can also supply an OAuth token obtained with the AzureAuth package.
```r
url <- res$properties$endpoint
key <- res$list_keys()[1]
vis <- computervision_endpoint(url=url, key=key)
```
AzureVision supports all the Computer Vision API calls:
- `analyze` extracts visual features from the image. To obtain more detailed features, specify the `domain` and/or `feature_types` arguments as appropriate.
- `describe` attempts to provide a text description of the image.
- `detect_objects` detects objects in the image.
- `area_of_interest` attempts to find the "interesting" part of an image, meaning the most likely location of its subject.
- `tag` returns a set of words that are relevant to the content of the image. Not to be confused with the `add_tags` or `add_image_tags` functions that are part of the Custom Vision API.
- `categorize` attempts to place the image into a list of predefined categories.
- `read_text` performs optical character recognition (OCR) on the image.
- `list_computervision_domains` returns the predefined domain-specific models that can be queried by `analyze` for deeper analysis. Currently there are two domains: celebrities and landmarks.
- `make_thumbnail` generates a thumbnail of the image.
## Sample images
These are the images we'll use to illustrate how the package works.
|Filename|Description|Picture|
|:------:|:---------:|:-----:|
|`bill.jpg`|A portrait of Bill Gates|<img src="../inst/images/bill.jpg" width=300/>|
|`park.jpg`|A picture of a city park|<img src="../inst/images/park.jpg" width=300/>|
|`gettysburg.png`|The text of the Gettysburg Address|<img src="../inst/images/gettysburg.png" width=300/>|
An image to send to the endpoint can be specified as a filename, a publicly accessible Internet URL, or a raw vector. For example, these calls are equivalent, assuming the underlying image is the same:
```r
# from the Internet
analyze(vis, "https://example.com/foo.jpg")
# local file
analyze(vis, "~/pics/foo.jpg")
# read the picture into a raw vector
foo <- readBin("~/pics/foo.jpg", "raw", file.size("~/pics/foo.jpg"))
analyze(vis, foo)
```
## Calls
### `analyze`
```r
# analyze Bill's portrait
analyze(vis, "bill.jpg")
```
```
$categories
name score
1 people_ 0.953125
```
`analyze` has optional arguments `domain`, for choosing a domain-specific model with which to analyze the image; and `feature_types`, to specify additional details to return.
```r
analyze(vis, "bill.jpg", domain="celebrities")
```
```
$categories
name score celebrities
1 people_ 0.953125 Bill Gates, 0.999981284141541, 276, 139, 211, 211
```
```r
analyze(vis, "bill.jpg", feature_types=c("faces", "objects"))
```
```
$faces
age gender faceRectangle.left faceRectangle.top faceRectangle.width faceRectangle.height
1 50 Male 274 138 210 210
$objects
rectangle.x rectangle.y rectangle.w rectangle.h object confidence
1 308 444 102 243 tie 0.652
```
### `describe`
```r
describe(vis, "bill.jpg")
```
```
$tags
[1] "person" "man" "suit" "clothing" "wearing" "glasses" "holding" "standing" "looking"
[10] "front" "posing" "business" "older" "dressed" "sign" "smiling" "old" "black"
[19] "phone" "woman" "people"
$captions
text confidence
1 Bill Gates wearing a suit and tie 0.9933712
```
### `detect_objects`
```r
detect_objects(vis, "park.jpg")
```
```
rectangle.x rectangle.y rectangle.w rectangle.h object confidence parent.object parent.confidence
1 624 278 132 351 building 0.637 <NA> NA
2 3 22 314 843 tree 0.655 plant 0.658
3 749 353 284 380 building 0.544 <NA> NA
4 1011 0 989 918 tree 0.719 plant 0.757
```
### `area_of_interest`
```r
area_of_interest(vis, "bill.jpg")
```
```
x y w h
0 45 750 749
```
### `tag`
```r
head(tag(vis, "park.jpg"))
```
```
name confidence hint
1 grass 0.9999686 <NA>
2 tree 0.9996704 <NA>
3 outdoor 0.9990110 <NA>
4 flower 0.9853659 <NA>
5 park 0.8954747 <NA>
6 building 0.8255661 <NA>
```
### `categorize`
```r
categorize(vis, "bill.jpg")
```
```
name score
1 people_ 0.953125
```
### `read_text`
```r
read_text(vis, "gettysburg.png")
```
```
[[1]]
[1] "Four score and seven years ago our fathers brought forth on this continent, a new nation,"
[2] "conceived in Liberty, and dedicated to the proposition that all men are created equal."
[3] "Now we are engaged in a great civil war, testing whether that nation, or any nation so"
[4] "conceived and so dedicated, can long endure. We are met on a great battle-field of that war."
[5] "We have come to dedicate a portion of that field, as a final resting place for those who here"
[6] "gave their lives that that nation might live. It is altogether fitting and proper that we should"
[7] "do this."
[8] "But, in a larger sense, we can not dedicate—we can not consecrate —we can not hallow — this"
[9] "ground. The brave men, living and dead, who struggled here, have consecrated it, far above"
[10] "our poor power to add or detract. The world will little note, nor long remember what we say"
[11] "here, but it can never forget what they did here. It is for us the living, rather, to be dedicated"
[12] "here to the unfinished work which they who fought here have thus far so nobly advanced. It"
[13] "is rather for us to be here dedicated to the great task remaining before us — that from these"
[14] "honored dead we take increased devotion to that cause for which they gave the last full"
[15] "measure of devotion— that we here highly resolve that these dead shall not have died in"
[16] "vain— that this nation, under God, shall have a new birth of freedom— and that government"
[17] "of the people, by the people, for the people, shall not perish from the earth."
[18] "— Abraham Lincoln"
```
### `make_thumbnail`
```r
make_thumbnail(vis, "bill.jpg", "bill_thumb.jpg")
```
<img src="../inst/images/bill_thumb.jpg"/>
## See also
- [Computer Vision Docs page](https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/Home)
- [API reference](https://westus.dev.cognitive.microsoft.com/docs/services/5cd27ec07268f6c679a3e641/operations/56f91f2e778daf14a499f21b)
| /scratch/gouwar.j/cran-all/cranData/AzureVision/inst/doc/computervision.Rmd |
---
title: "Creating and deploying a Custom Vision predictive service"
author: Hong Ooi
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Custom Vision}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{utf8}
---
The basic idea behind Custom Vision is to take a pre-built image recognition model supplied by Azure, and customise it for your needs by supplying a set of images with which to update it. All model training and prediction is done in the cloud, so you don't need a powerful machine. Similarly, since you are starting with a model that has already been trained, you don't need a very large dataset or long training times to obtain good predictions (ideally). This vignette walks you through the process of creating and deploying a Custom Vision predictive service.
## Creating the resources
You can create the Custom Vision resources using the AzureRMR framework for interacting with Resource Manager. Note that Custom Vision requires at least _two_ resources to be created: one for training, and one for prediction. The available service tiers for Custom Vision are `F0` (free, limited to 2 projects for training and 10k transactions/month for prediction) and `S0`.
```r
library(AzureVision)
rg <- AzureRMR::get_azure_login("yourtenant")$
get_subscription("sub_id")$
get_resource_group("rgname")
res <- rg$create_cognitive_service("mycustvis",
service_type="CustomVision.Training", service_tier="S0")
pred_res <- rg$create_cognitive_service("mycustvispred",
service_type="CustomVision.Prediction", service_tier="S0")
```
## Training
Custom Vision defines two different types of endpoint: a training endpoint, and a prediction endpoint. Somewhat confusingly, they can both share the same hostname, but use different paths and authentication keys. To start, call the `customvision_training_endpoint` function with the service URL and key.
```r
url <- res$properties$endpoint
key <- res$list_keys()[1]
endp <- customvision_training_endpoint(url=url, key=key)
```
### Creating the project
Custom Vision is organised hierarchically. At the top level, we have a _project_, which represents the data and model for a specific task. Within a project, we have one or more _iterations_ of the model, built on different sets of training images. Each iteration in a project is independent: you can create (train) an iteration, deploy it, and delete it without affecting other iterations.
You can see the projects that currently exist on the endpoint by calling `list_projects`. This returns a named list of project objects:
```r
list_projects(endp)
```
```
$general_compact
Azure Custom Vision project 'general_compact' (304fc776-d860-490a-b4ec-5964bb134743)
Endpoint: https://australiaeast.api.cognitive.microsoft.com/customvision/v3.0
Domain: classification.general.compact (0732100f-1a38-4e49-a514-c9b44c697ab5)
Export target: standard
Classification type: Multiclass
$general_multilabel
Azure Custom Vision project 'general_multilabel' (c485f10b-cb54-47a3-b585-624488335f58)
Endpoint: https://australiaeast.api.cognitive.microsoft.com/customvision/v3.0
Domain: classification.general (ee85a74c-405e-4adc-bb47-ffa8ca0c9f31)
Export target: none
Classification type: Multilabel
$logo_obj
Azure Custom Vision project 'logo_obj' (af82557f-6ead-401c-afd6-bb9d5a3b042b)
Endpoint: https://australiaeast.api.cognitive.microsoft.com/customvision/v3.0
Domain: object_detection.logo (1d8ffafe-ec40-4fb2-8f90-72b3b6cecea4)
Export target: none
Classification type: NA
```
There are three different types of projects, as implied by the list above:
- A _multiclass classification_ project is for classifying images into a set of _tags_, or target labels. An image can be assigned to one tag only.
- A _multilabel classification_ project is similar, but each image can have multiple tags assigned to it.
- An _object detection_ project is for detecting which objects, if any, from a set of candidates are present in an image.
The functions to create these projects are `create_classification_project` (which is used to create both multiclass and multilabel projects) and `create_object_detection_project`. Let's create a classification project:
```r
testproj <- create_classification_project(endp, "testproj", export_target="standard")
testproj
```
```
Azure Custom Vision project 'testproj' (db368447-e5da-4cd7-8799-0ccd8157323e)
Endpoint: https://australiaeast.api.cognitive.microsoft.com/customvision/v3.0
Domain: classification.general.compact (0732100f-1a38-4e49-a514-c9b44c697ab5)
Export target: standard
Classification type: Multiclass
```
Here, we specify the export target to be `standard` to support exporting the final model to one of various standalone formats, eg TensorFlow, CoreML or ONNX. The default is `none`, in which case the model stays on the Custom Vision server. The advantage of `none` is that the model can be more complex, resulting in potentially better accuracy. The type of project is multiclass classification, and the domain (the initial model used as the basis for training) is `general`. Other possible domains for classification include `landmarks` and `retail`.
### Adding and tagging images
Since a Custom Vision model is trained in Azure and not locally, we need to upload some images. The data we'll use comes from the Microsoft [Computer Vision Best Practices](https://github.com/microsoft/computervision-recipes) project. This is a simple set of images containing 4 kinds of objects one might find in a fridge: cans, cartons, milk bottles, and water bottles.
```r
download.file(
"https://cvbp.blob.core.windows.net/public/datasets/image_classification/fridgeObjects.zip",
"fridgeObjects.zip"
)
unzip("fridgeObjects.zip")
```
The generic function to add images to a project is `add_images`, which takes a vector of filenames, Internet URLs or raw vectors as the images to upload. The method for classification projects also has an argument `tags` which can be used to assign labels to the images as they are uploaded.
`add_images` returns a vector of _image IDs_, which are how Custom Vision keeps track of the images it uses. It should be noted that Custom Vision does not keep a record of the source filename or URL; it works _only_ with image IDs. A future release of AzureVision may automatically track the source metadata, allowing you to associate an ID with an actual image. For now, this must be done manually.
Let's upload the fridge objects to the project. We'll keep aside 5 images from each class of object to use as validation data.
```r
cans <- dir("fridgeObjects/can", full.names=TRUE)
cartons <- dir("fridgeObjects/carton", full.names=TRUE)
milk <- dir("fridgeObjects/milk_bottle", full.names=TRUE)
water <- dir("fridgeObjects/water_bottle", full.names=TRUE)
# upload all but 5 images from cans and cartons, and tag them
can_ids <- add_images(testproj, cans[-(1:5)], tags="can")
carton_ids <- add_images(testproj, cartons[-(1:5)], tags="carton")
```
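Since Custom Vision only keeps track of image IDs, one simple way to retain the link back to the source files is to name the ID vectors yourself. This is just a sketch, and assumes the IDs are returned in the same order as the filenames supplied:
```r
# associate each uploaded image ID with its source filename
names(can_ids) <- basename(cans[-(1:5)])
names(carton_ids) <- basename(cartons[-(1:5)])
```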
If you don't tag the images at upload time, you can do so later with `add_image_tags`:
```r
# upload all but 5 images from milk and water bottles
milk_ids <- add_images(testproj, milk[-(1:5)])
water_ids <- add_images(testproj, water[-(1:5)])
add_image_tags(testproj, milk_ids, tags="milk_bottle")
add_image_tags(testproj, water_ids, tags="water_bottle")
```
Other image functions to be aware of include `list_images`, `remove_images`, and `add_image_regions` (which is for object detection projects). A useful one is `browse_images`, which takes a vector of IDs and displays the corresponding images in your browser.
```r
browse_images(testproj, water_ids[1:5])
```
### Training the model
Having uploaded the data, we can train the Custom Vision model with `train_model`. This trains the model on the server and returns a _model iteration_, which is the result of running the training algorithm on the current set of images. Each time you call `train_model`, for example to update the model after adding or removing images, you will obtain a different model iteration. In general, you can rely on AzureVision to keep track of the iterations for you, and automatically return the relevant results for the latest iteration.
```r
mod <- train_model(testproj)
mod
```
```
Azure Custom Vision model
Project/iteration: testproj/Iteration 1 (f243bb4c-e4f8-473e-9df0-190a407472be)
```
Optional arguments to `train_model` include:
- `training_method`: Set this to "advanced" to force Custom Vision to do the training from scratch, rather than simply updating a pre-trained model. This also enables the other arguments below.
- `max_time`: If `training_method == "advanced"`, the maximum runtime in hours for training the model. The default is 1 hour.
- `force`: If `training_method == "advanced"`, whether to train the model anyway even if the images have not changed.
- `email`: If `training_method == "advanced"`, an optional email address to send a notification to when the training is complete.
- `wait`: Whether to wait until training completes before returning.
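For instance, an advanced training run with a longer time budget might look like the following sketch (the email address is a placeholder):
```r
mod2 <- train_model(testproj, training_method="advanced", max_time=2,
                    email="[email protected]", wait=FALSE)
```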
Other model iteration management functions are `get_model` (to retrieve a previously trained iteration), `list_models` (retrieve all previously trained iterations), and `delete_model`.
We can examine the model performance on the training data (which may be different to the current data!) with the `summary` method. For this toy problem, the model manages to obtain a perfect fit.
```r
summary(mod)
```
```
$perTagPerformance
id name precision precisionStdDeviation recall
1 22ddd4bc-2031-43a1-b0ef-eb6b219eb6f7 can 1 0 1
2 301db6f9-b701-4dc6-8650-a9cf3fe4bb2e carton 1 0 1
3 594ad770-83e5-4c77-825d-9249dae4a2c6 milk_bottle 1 0 1
4 eda5869a-cc75-41df-9c4c-717c10f79739 water_bottle 1 0 1
recallStdDeviation averagePrecision
1 0 1
2 0 1
3 0 1
4 0 1
$precision
[1] 1
$precisionStdDeviation
[1] 0
$recall
[1] 1
$recallStdDeviation
[1] 0
$averagePrecision
[1] 1
```
Obtaining predictions from the trained model is done with the `predict` method. By default, this returns the predicted tag (class label) for the image, but you can also get the predicted class probabilities by specifying `type="prob"`.
```r
validation_imgs <- c(cans[1:5], cartons[1:5], milk[1:5], water[1:5])
validation_tags <- rep(c("can", "carton", "milk_bottle", "water_bottle"), each=5)
predicted_tags <- predict(mod, validation_imgs)
table(predicted_tags, validation_tags)
```
```
validation_tags
predicted_tags can carton milk_bottle water_bottle
can 4 0 0 0
carton 0 5 0 0
milk_bottle 1 0 5 0
water_bottle 0 0 0 5
```
```r
head(predict(mod, validation_imgs, type="prob"))
```
```
can carton milk_bottle water_bottle
[1,] 9.999968e-01 8.977501e-08 5.855104e-11 3.154334e-06
[2,] 9.732912e-01 3.454168e-10 4.610847e-06 2.670425e-02
[3,] 3.019476e-01 5.779990e-04 6.974699e-01 4.506565e-06
[4,] 5.072662e-01 2.849253e-03 4.856858e-01 4.198686e-03
[5,] 9.962270e-01 5.411842e-07 3.540882e-03 2.316211e-04
[6,] 3.145034e-11 1.000000e+00 2.574793e-10 4.242047e-14
```
This shows that the model got 19 out of 20 predictions correct on the validation data, misclassifying one of the cans as a milk bottle.
## Deployment
### Publishing to a prediction resource
The code above demonstrates using the training endpoint to obtain predictions, which is really meant only for model testing and validation. For production purposes, we would normally _publish_ a trained model to a Custom Vision prediction resource. Among other things, a user with access to the training endpoint has complete freedom to modify the model and the data, whereas access to the prediction endpoint only allows getting predictions.
Publishing a model requires knowing the Azure resource ID of the prediction resource. Here, we'll use the resource object that was created earlier using AzureRMR; you can also obtain this information from the Azure Portal.
```r
# publish to the prediction resource we created above
publish_model(mod, "iteration1", pred_res)
```
Once a model has been published, we can obtain predictions from the prediction endpoint in much the same way as before. We create a predictive service object with `classification_service`, and then call the `predict` method. Note that a required input is the project ID; you can supply this directly or via the project object.
```r
pred_url <- pred_res$properties$endpoint
pred_key <- pred_res$list_keys()[1]
pred_endp <- customvision_prediction_endpoint(url=pred_url, key=pred_key)
project_id <- testproj$project$id
pred_svc <- classification_service(pred_endp, project_id, "iteration1")
# predictions from prediction endpoint -- same as before
predsvc_tags <- predict(pred_svc, validation_imgs)
table(predsvc_tags, validation_tags)
```
```
validation_tags
predsvc_tags can carton milk_bottle water_bottle
can 4 0 0 0
carton 0 5 0 0
milk_bottle 1 0 5 0
water_bottle 0 0 0 5
```
### Exporting as standalone
As an alternative to deploying the model to an online predictive service resource, you can also export the model to a standalone format. This is only possible if the project was created to support exporting. The formats supported include:
- ONNX 1.2
- CoreML
- TensorFlow or TensorFlow Lite
- A Docker image for either the Linux, Windows or Raspberry Pi environment
- Vision AI Development Kit (VAIDK)
To export the model, call `export_model` and specify the target format. By default, the model will be downloaded to your local machine, but `export_model` also (invisibly) returns a URL from where it can be downloaded independently.
```r
export_model(mod, "tensorflow")
```
```
Downloading to f243bb4c-e4f8-473e-9df0-190a407472be.TensorFlow.zip
trying URL 'https://irisprodae...'
Content type 'application/octet-stream' length 4673656 bytes (4.5 MB)
downloaded 4.5 MB
```
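The download URL is also returned invisibly, so you can capture it if you want to fetch the exported model separately. A minimal sketch:
```r
dl_url <- export_model(mod, "tensorflow")
dl_url
```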
## See also
- [CustomVision.ai](https://www.customvision.ai/): An interactive site for building Custom Vision models, provided by Microsoft
- [Training API reference](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.0/operations/5c771cdcbf6a2b18a0c3b7fa)
- [Prediction API reference](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c15)
| /scratch/gouwar.j/cran-all/cranData/AzureVision/inst/doc/customvision.Rmd |
---
title: "Creating and deploying a Custom Vision predictive service"
author: Hong Ooi
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Custom Vision}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{utf8}
---
The basic idea behind Custom Vision is to take a pre-built image recognition model supplied by Azure, and customise it for your needs by supplying a set of images with which to update it. All model training and prediction is done in the cloud, so you don't need a powerful machine. Similarly, since you are starting with a model that has already been trained, you don't need a very large dataset or long training times to obtain good predictions (ideally). This vignette walks you through the process of creating and deploying a Custom Vision predictive service.
## Creating the resources
You can create the Custom Vision resources using the AzureRMR framework for interacting with Resource Manager. Note that Custom Vision requires at least _two_ resources to be created: one for training, and one for prediction. The available service tiers for Custom Vision are `F0` (free, limited to 2 projects for training and 10k transactions/month for prediction) and `S0`.
```r
library(AzureVision)
rg <- AzureRMR::get_azure_login("yourtenant")$
get_subscription("sub_id")$
get_resource_group("rgname")
res <- rg$create_cognitive_service("mycustvis",
service_type="CustomVision.Training", service_tier="S0")
pred_res <- rg$create_cognitive_service("mycustvispred",
service_type="CustomVision.Prediction", service_tier="S0")
```
## Training
Custom Vision defines two different types of endpoint: a training endpoint, and a prediction endpoint. Somewhat confusingly, they can both share the same hostname, but use different paths and authentication keys. To start, call the `customvision_training_endpoint` function with the service URL and key.
```r
url <- res$properties$endpoint
key <- res$list_keys()[1]
endp <- customvision_training_endpoint(url=url, key=key)
```
### Creating the project
Custom Vision is organised hierarchically. At the top level, we have a _project_, which represents the data and model for a specific task. Within a project, we have one or more _iterations_ of the model, built on different sets of training images. Each iteration in a project is independent: you can create (train) an iteration, deploy it, and delete it without affecting other iterations.
You can see the projects that currently exist on the endpoint by calling `list_projects`. This returns a named list of project objects:
```r
list_projects(endp)
```
```
$general_compact
Azure Custom Vision project 'general_compact' (304fc776-d860-490a-b4ec-5964bb134743)
Endpoint: https://australiaeast.api.cognitive.microsoft.com/customvision/v3.0
Domain: classification.general.compact (0732100f-1a38-4e49-a514-c9b44c697ab5)
Export target: standard
Classification type: Multiclass
$general_multilabel
Azure Custom Vision project 'general_multilabel' (c485f10b-cb54-47a3-b585-624488335f58)
Endpoint: https://australiaeast.api.cognitive.microsoft.com/customvision/v3.0
Domain: classification.general (ee85a74c-405e-4adc-bb47-ffa8ca0c9f31)
Export target: none
Classification type: Multilabel
$logo_obj
Azure Custom Vision project 'logo_obj' (af82557f-6ead-401c-afd6-bb9d5a3b042b)
Endpoint: https://australiaeast.api.cognitive.microsoft.com/customvision/v3.0
Domain: object_detection.logo (1d8ffafe-ec40-4fb2-8f90-72b3b6cecea4)
Export target: none
Classification type: NA
```
There are three different types of projects, as implied by the list above:
- A _multiclass classification_ project is for classifying images into a set of _tags_, or target labels. An image can be assigned to one tag only.
- A _multilabel classification_ project is similar, but each image can have multiple tags assigned to it.
- An _object detection_ project is for detecting which objects, if any, from a set of candidates are present in an image.
The functions to create these projects are `create_classification_project` (which is used to create both multiclass and multilabel projects) and `create_object_detection_project`. Let's create a classification project:
```r
testproj <- create_classification_project(endp, "testproj", export_target="standard")
testproj
```
```
Azure Custom Vision project 'testproj' (db368447-e5da-4cd7-8799-0ccd8157323e)
Endpoint: https://australiaeast.api.cognitive.microsoft.com/customvision/v3.0
Domain: classification.general.compact (0732100f-1a38-4e49-a514-c9b44c697ab5)
Export target: standard
Classification type: Multiclass
```
Here, we specify the export target to be `standard` to support exporting the final model to one of various standalone formats, eg TensorFlow, CoreML or ONNX. The default is `none`, in which case the model stays on the Custom Vision server. The advantage of `none` is that the model can be more complex, resulting in potentially better accuracy. The type of project is multiclass classification, and the domain (the initial model used as the basis for training) is `general`. Other possible domains for classification include `landmarks` and `retail`.
### Adding and tagging images
Since a Custom Vision model is trained in Azure and not locally, we need to upload some images. The data we'll use comes from the Microsoft [Computer Vision Best Practices](https://github.com/microsoft/computervision-recipes) project. This is a simple set of images containing 4 kinds of objects one might find in a fridge: cans, cartons, milk bottles, and water bottles.
```r
download.file(
"https://cvbp.blob.core.windows.net/public/datasets/image_classification/fridgeObjects.zip",
"fridgeObjects.zip"
)
unzip("fridgeObjects.zip")
```
The generic function to add images to a project is `add_images`, which takes a vector of filenames, Internet URLs or raw vectors as the images to upload. The method for classification projects also has an argument `tags` which can be used to assign labels to the images as they are uploaded.
`add_images` returns a vector of _image IDs_, which are how Custom Vision keeps track of the images it uses. Note that Custom Vision does not keep a record of the source filename or URL; it works _only_ with image IDs. A future release of AzureVision may automatically track the source metadata, allowing you to associate an ID with an actual image; for now, this must be done manually (a simple way to do so is sketched after the upload code below).
Let's upload the fridge objects to the project. We'll keep aside 5 images from each class of object to use as validation data.
```r
cans <- dir("fridgeObjects/can", full.names=TRUE)
cartons <- dir("fridgeObjects/carton", full.names=TRUE)
milk <- dir("fridgeObjects/milk_bottle", full.names=TRUE)
water <- dir("fridgeObjects/water_bottle", full.names=TRUE)
# upload all but 5 images from cans and cartons, and tag them
can_ids <- add_images(testproj, cans[-(1:5)], tags="can")
carton_ids <- add_images(testproj, cartons[-(1:5)], tags="carton")
```
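As noted above, Custom Vision itself stores only the image IDs. If you want to keep the association between IDs and source files, a simple approach is to save the mapping yourself in plain base R (this assumes the returned IDs are in the same order as the uploaded files):
```r
# keep a record of which file each image ID came from
can_map <- data.frame(id=can_ids, file=cans[-(1:5)], stringsAsFactors=FALSE)
carton_map <- data.frame(id=carton_ids, file=cartons[-(1:5)], stringsAsFactors=FALSE)
```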
If you don't tag the images at upload time, you can do so later with `add_image_tags`:
```r
# upload all but 5 images from milk and water bottles
milk_ids <- add_images(testproj, milk[-(1:5)])
water_ids <- add_images(testproj, water[-(1:5)])
add_image_tags(testproj, milk_ids, tags="milk_bottle")
add_image_tags(testproj, water_ids, tags="water_bottle")
```
Other image functions to be aware of include `list_images`, `remove_images`, and `add_image_regions` (which is for object detection projects). A useful one is `browse_images`, which takes a vector of IDs and displays the corresponding images in your browser.
```r
browse_images(testproj, water_ids[1:5])
```
### Training the model
Having uploaded the data, we can train the Custom Vision model with `train_model`. This trains the model on the server and returns a _model iteration_, which is the result of running the training algorithm on the current set of images. Each time you call `train_model`, for example to update the model after adding or removing images, you will obtain a different model iteration. In general, you can rely on AzureVision to keep track of the iterations for you, and automatically return the relevant results for the latest iteration.
```r
mod <- train_model(testproj)
mod
```
```
Azure Custom Vision model
Project/iteration: testproj/Iteration 1 (f243bb4c-e4f8-473e-9df0-190a407472be)
```
Optional arguments to `train_model` include:
- `training_method`: Set this to "advanced" to force Custom Vision to do the training from scratch, rather than simply updating a pre-trained model. This also enables the other arguments below.
- `max_time`: If `training_method == "advanced"`, the maximum runtime in hours for training the model. The default is 1 hour.
- `force`: If `training_method == "advanced"`, whether to train the model anyway even if the images have not changed.
- `email`: If `training_method == "advanced"`, an optional email address to send a notification to when the training is complete.
- `wait`: Whether to wait until training completes before returning.
Other model iteration management functions are `get_model` (to retrieve a previously trained iteration), `list_models` (retrieve all previously trained iterations), and `delete_model`.
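As a sketch of how these might be used (default arguments are assumed, with `get_model` returning the latest iteration when none is specified):
```r
# list all iterations trained so far for this project
list_models(testproj)
# retrieve a previously trained iteration (the latest, by default)
mod_prev <- get_model(testproj)
# an iteration that is no longer needed can be removed with delete_model(mod_prev)
```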
We can examine the model performance on the training data (which may be different to the current data!) with the `summary` method. For this toy problem, the model manages to obtain a perfect fit.
```r
summary(mod)
```
```
$perTagPerformance
id name precision precisionStdDeviation recall
1 22ddd4bc-2031-43a1-b0ef-eb6b219eb6f7 can 1 0 1
2 301db6f9-b701-4dc6-8650-a9cf3fe4bb2e carton 1 0 1
3 594ad770-83e5-4c77-825d-9249dae4a2c6 milk_bottle 1 0 1
4 eda5869a-cc75-41df-9c4c-717c10f79739 water_bottle 1 0 1
recallStdDeviation averagePrecision
1 0 1
2 0 1
3 0 1
4 0 1
$precision
[1] 1
$precisionStdDeviation
[1] 0
$recall
[1] 1
$recallStdDeviation
[1] 0
$averagePrecision
[1] 1
```
Obtaining predictions from the trained model is done with the `predict` method. By default, this returns the predicted tag (class label) for the image, but you can also get the predicted class probabilities by specifying `type="prob"`.
```r
validation_imgs <- c(cans[1:5], cartons[1:5], milk[1:5], water[1:5])
validation_tags <- rep(c("can", "carton", "milk_bottle", "water_bottle"), each=5)
predicted_tags <- predict(mod, validation_imgs)
table(predicted_tags, validation_tags)
```
```
validation_tags
predicted_tags can carton milk_bottle water_bottle
can 4 0 0 0
carton 0 5 0 0
milk_bottle 1 0 5 0
water_bottle 0 0 0 5
```
```r
head(predict(mod, validation_imgs, type="prob"))
```
```
can carton milk_bottle water_bottle
[1,] 9.999968e-01 8.977501e-08 5.855104e-11 3.154334e-06
[2,] 9.732912e-01 3.454168e-10 4.610847e-06 2.670425e-02
[3,] 3.019476e-01 5.779990e-04 6.974699e-01 4.506565e-06
[4,] 5.072662e-01 2.849253e-03 4.856858e-01 4.198686e-03
[5,] 9.962270e-01 5.411842e-07 3.540882e-03 2.316211e-04
[6,] 3.145034e-11 1.000000e+00 2.574793e-10 4.242047e-14
```
This shows that the model got 19 out of 20 predictions correct on the validation data, misclassifying one of the cans as a milk bottle.
## Deployment
### Publishing to a prediction resource
The code above demonstrates using the training endpoint to obtain predictions, which is really meant only for model testing and validation. For production purposes, we would normally _publish_ a trained model to a Custom Vision prediction resource. Among other things, a user with access to the training endpoint has complete freedom to modify the model and the data, whereas access to the prediction endpoint only allows getting predictions.
Publishing a model requires knowing the Azure resource ID of the prediction resource. Here, we'll use the resource object that was created earlier using AzureRMR; you can also obtain this information from the Azure Portal.
```r
# publish to the prediction resource we created above
publish_model(mod, "iteration1", pred_res)
```
Once a model has been published, we can obtain predictions from the prediction endpoint in a manner very similar to previously. We create a predictive service object with `classification_service`, and then call the `predict` method. Note that a required input is the project ID; you can supply this directly or via the project object.
```r
pred_url <- pred_res$properties$endpoint
pred_key <- pred_res$list_keys()[1]
pred_endp <- customvision_prediction_endpoint(url=pred_url, key=pred_key)
project_id <- testproj$project$id
pred_svc <- classification_service(pred_endp, project_id, "iteration1")
# predictions from prediction endpoint -- same as before
predsvc_tags <- predict(pred_svc, validation_imgs)
table(predsvc_tags, validation_tags)
```
```
validation_tags
predsvc_tags can carton milk_bottle water_bottle
can 4 0 0 0
carton 0 5 0 0
milk_bottle 1 0 5 0
water_bottle 0 0 0 5
```
### Exporting as standalone
As an alternative to deploying the model to an online predictive service resource, you can also export the model to a standalone format. This is only possible if the project was created to support exporting. The formats supported include:
- ONNX 1.2
- CoreML
- TensorFlow or TensorFlow Lite
- A Docker image for either the Linux, Windows or Raspberry Pi environment
- Vision AI Development Kit (VAIDK)
To export the model, call `export_model` and specify the target format. By default, the model will be downloaded to your local machine, but `export_model` also (invisibly) returns a URL from where it can be downloaded independently.
```r
export_model(mod, "tensorflow")
```
```
Downloading to f243bb4c-e4f8-473e-9df0-190a407472be.TensorFlow.zip
trying URL 'https://irisprodae...'
Content type 'application/octet-stream' length 4673656 bytes (4.5 MB)
downloaded 4.5 MB
```
## See also
- [CustomVision.ai](https://www.customvision.ai/): An interactive site for building Custom Vision models, provided by Microsoft
- [Training API reference](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.0/operations/5c771cdcbf6a2b18a0c3b7fa)
- [Prediction API reference](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c15)
| /scratch/gouwar.j/cran-all/cranData/AzureVision/vignettes/customvision.Rmd |
# This file contains no code: the bundle formerly known as BACCO is
# now a 'virtual' package which requires the constituent packages:
# emulator, calibrator, and approximator.
| /scratch/gouwar.j/cran-all/cranData/BACCO/R/BACCO.R |
### R code from vignette source 'both_papers.Rnw'
###################################################
### code chunk number 1: both_papers.Rnw:409-409
###################################################
###################################################
### code chunk number 2: both_papers.Rnw:410-413
###################################################
set.seed(0)
require(BACCO)
data(toy)
###################################################
### code chunk number 3: both_papers.Rnw:416-418
###################################################
toy <- latin.hypercube(20,6)
head(toy)
###################################################
### code chunk number 4: both_papers.Rnw:427-429
###################################################
f <- function(x) sum( (0:6)*x)
expectation <- apply(regressor.multi(toy), 1, f)
###################################################
### code chunk number 5: both_papers.Rnw:437-440
###################################################
toy.scales <- rep(1,6)
toy.sigma <- 0.4
A <- toy.sigma*corr.matrix(xold=toy, scales=toy.scales)
###################################################
### code chunk number 6: both_papers.Rnw:448-449
###################################################
d <- as.vector(rmvnorm(n=1 , mean=expectation , sigma=A))
###################################################
### code chunk number 7: both_papers.Rnw:457-460
###################################################
x.unknown <- rep(0.5 , 6)
jj <- interpolant(x.unknown, d, toy, scales=toy.scales, g=TRUE)
print(drop(jj$mstar.star))
###################################################
### code chunk number 8: both_papers.Rnw:468-469
###################################################
print(jj$betahat)
###################################################
### code chunk number 9: both_papers.Rnw:473-474
###################################################
print(jj$beta.marginal.sd)
###################################################
### code chunk number 10: both_papers.Rnw:500-502
###################################################
scales.optim <- optimal.scales(val=toy, scales.start=rep(1,6), d=d, give=FALSE)
print(scales.optim)
###################################################
### code chunk number 11: both_papers.Rnw:515-517
###################################################
interpolant(x.unknown, d, toy, scales=toy.scales , g=FALSE)
interpolant(x.unknown, d, toy, scales=scales.optim, g=FALSE)
###################################################
### code chunk number 12: both_papers.Rnw:564-639
###################################################
data(results.table)
data(expert.estimates)
# Decide which column we are interested in:
output.col <- 26
wanted.cols <- c(2:9,12:19)
# Decide how many to keep;
# 30-40 is about the most we can handle:
wanted.row <- 1:27
# Values to use are the ones that appear in goin.test2.comments:
val <- results.table[wanted.row , wanted.cols]
# Now normalize val so that 0<results.table[,i]<1 for all i:
normalize <- function(x){(x-mins)/(maxes-mins)}
unnormalize <- function(x){mins + (maxes-mins)*x}
mins <- expert.estimates$low
maxes <- expert.estimates$high
jj <- t(apply(val,1,normalize))
val <- as.matrix(jj)
## Answer is the 19th (or 20th or ... or 26th)
d <- results.table[wanted.row , output.col]
scales.optim <-
c(0.054095, 0.007055, 0.034944, 10.772536, 0.085691, 0.144568, 0.033540,
0.641465, 0.235039, 0.046189, 0.949328, 0.055576, 0.058894, 0.098077,
0.045411, 0.167629)
A <- corr.matrix(val,scales=rep(1,ncol(val)))
Ainv <- solve(A)
## Now try to find the best correlation lengths:
## the idea is to minimize the maximum absolute deviation
## from the known points.
d.observed <- results.table[, output.col]
A <- corr.matrix(val,scales=scales.optim)
Ainv <- solve(A)
design.normalized <- as.matrix(t(apply(results.table[,wanted.cols],1,normalize)))
jj.preds <- interpolant.quick(design.normalized, d, val, Ainv, give.Z=TRUE,
scales=scales.optim)
d.predicted <- jj.preds$mstar.star
d.errors <- jj.preds$Z
jj <- range(c(d.observed,d.predicted))
par(pty="s")
plot(d.observed, d.predicted, pch=16, asp=1,
xlim=jj,ylim=jj,
xlab=expression(paste(temperature," (",{}^o,C,"), model" )),
ylab=expression(paste(temperature," (",{}^o,C,"), emulator"))
)
errorbar <- function(x,y,delta,serifwidth=7,...){
if(abs(delta)<1e-5){return()}
lines(x=c(x,x),y=c(y-delta,y+delta), ...)
lines(x=c(x-serifwidth/2,x+serifwidth/2),
y=c(y+delta,y+delta), ...
)
lines(x=c(x-serifwidth/2,x+serifwidth/2),
y=c(y-delta,y-delta), ...
)
}
for(i in 1:length(d.observed)){
errorbar(x=d.observed[i],y=d.predicted[i],delta=qt(0.975,df=11)*d.errors[i],serifwidth=0.1,col="red")
}
abline(0,1)
###################################################
### code chunk number 13: both_papers.Rnw:898-899
###################################################
args(ht.fun)
###################################################
### code chunk number 14: both_papers.Rnw:1249-1264
###################################################
load.the.files <- TRUE
#library(goldstein)
if(load.the.files){
load("e10000")
load("temps.jj")
o <- order(temps.jj)
load("probs.from.prior")
j0 <- j0-min(j0)
}
x.pp <- cumsum(exp(j0[o] ))
plot(temps.jj[o],x.pp/max(x.pp),ylab="cumulative probability",xlab="temperature (C)",
type="l",lty=1,main="Prior and posterior CDF for temperature in Northern Europe")
points(sort(temps.jj),(1:9000)/9000,type="l",lty=2)
legend(0,0.95,legend=c("posterior","prior"),lty=1:2)
###################################################
### code chunk number 15: SetTheBib
###################################################
bib <- system.file( "doc", "uncertainty.bib", package = "emulator" )
bib <- sub('.bib$','',bib)
###################################################
### code chunk number 16: usethebib
###################################################
cat( "\\bibliography{",bib,"}\n",sep='')
| /scratch/gouwar.j/cran-all/cranData/BACCO/inst/doc/both_papers.R |
#' @details JAGS software can be downloaded from \url{http://mcmc-jags.sourceforge.net/}.
#' @examples \dontrun{
#' library(BACCT)
#'
#' #############################
#' #Example for binary response#
#' #############################
#'
#' #specify historical data
#' yh = c(11,305,52);nh = c(45,874,120)
#' #specify subjects
#' n1 = 20;n2 = 30
#' #implement BAC and wait patiently
#' post = BAC_binom(yh=yh,nh=nh,n1=n1,n2=n2,n.chain=5,
#' criterion.type="diff",sim.mode="express")
#' #evaluate the decision
#' rule1 = decision_eval(object=post,decision.rule=c(0.05,0.05),
#' control.range=seq(0.3,0.5,0.01),es=c(0,0.1,0.15),csv.name="rule1.csv")
#' #plot the decision evaluation
#' (fig1 = plot(rule1))
#' #continue polishing the figure
#' #add data points
#' fig1 + geom_point(size=4)
#' #replace the title
#' fig1 + ggtitle("replace title")
#' #add reference lines
#' fig1 + geom_hline(aes(yintercept=0.05)) +
#' geom_vline(aes(xintercept=0.42),color="black",linetype="dashed")
#' }
#'
#' @references Viele, et al., "Use of historical control data for assessing
#' treatment effects in clinical trials." Pharmaceutical statistics 13(1)
#' (2014): 41-54.
"_PACKAGE"
| /scratch/gouwar.j/cran-all/cranData/BACCT/R/BACCT.R |
#' @title Bayesian Augmented Control for Binary Responses
#' @description Calling JAGS to implement BAC for binary responses
#' @param yh,nh Vector of the numbers of events (subjects) in the historical
#' trial(s). Must be of equal length.
#' @param n1,n2 Number of subjects in the control or treatment arm of the current
#' trial.
#' @param y1.range,y2.range Range of the number of events in the control or
#' treatment arm of the current trial. See "Details".
#' @param n.chain Controls the number of posterior samples. Each chain contains
#' 20,000 samples.
#' @param tau.alpha,tau.beta Hyperparameters of the inverse gamma distribution
#' controlling the extent of borrowing.
#' @param prior.type Type of prior on control groups. Currently, only the
#' inverse-gamma prior is implemented.
#' @param criterion.type Type of posterior quantities to be monitored. See
#' "Details."
#' @param prob.threshold For \code{criterion.type="prob"} only. See "Details".
#' @param sim.mode Simulation mode. The simulation time is greatly reduced in
#' \code{"express"} mode, which is appropriate when the treatment and control arms
#' are independent. See "Details".
#' @details There are two types of posterior quantities for the
#' \code{criterion.type} argument. With the \code{"diff"} option, the quantity
#' computed is \eqn{p_{T} - p_{C}}; with the \code{"prob"} option, such quantity is
#' \eqn{pr(p_{T} - p_{C}>\Delta)}, where \eqn{\Delta} is specified by
#' the \code{prob.threshold} argument.
#'
#' By default, \code{y1.range} and \code{y2.range} cover all possible outcomes
#' and should be left unspecified in most cases. However, when \code{n1}
#' and/or \code{n2} is fairly large, it is acceptable to use a reduced range
#' that covers the outcomes that are most likely (e.g., within 95\% CI) to be
#' observed. This may help shorten the time to run MCMC.
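#'
#' For instance, with an anticipated control rate \code{p_guess} (a hypothetical
#' quantity used here for illustration, not an argument of this function), a reduced
#' range covering the central 95\% of control-arm outcomes would be
#' \code{y1.range = qbinom(0.025, n1, p_guess):qbinom(0.975, n1, p_guess)}.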
#'
#' Another way to greatly shorten the MCMC running time is to specify
#' \code{"express"} mode in the \code{sim.mode} argument. Express mode reduces the
#' number of simulations from \code{length(y1.range)*length(y2.range)} to
#' \code{length(y1.range)+length(y2.range)}. Express mode is appropriate when the
#' treatment arm rate is independent of the control arm rate.
#' @return An object of class "BAC".
#' @examples
#' \dontrun{
#' library(BACCT)
#' #borrow from 3 historical trials#
#' yh = c(11,300,52);nh = c(45,877,128)
#' #specify current trial sample sizes#
#' n1 = 20;n2 = 30
#'
#' #Difference criterion type in full simulation mode#
#' obj1 = BAC_binom(yh=yh,nh=nh,n1=n1,n2=n2,n.chain=5,
#' criterion.type="diff",sim.mode="full")
#'
#' #Probability criterion type in express simulation mode#
#' obj2 = BAC_binom(yh=yh,nh=nh,n1=n1,n2=n2,n.chain=5,
#' criterion.type="prob",prob.threshold=0.1,sim.mode="express")
#'
#' #S3 method for class "BAC"
#' summary(obj1)
#' }
#' @author Hongtao Zhang
#' @import rjags
#' @importFrom stats update
#' @export
BAC_binom = function(yh,nh,n1,n2,y1.range = 0:n1,y2.range = 0:n2,n.chain =
5,tau.alpha = 0.001, tau.beta = 0.001, prior.type =
"nonmixture",criterion.type = c("diff","prob"),prob.threshold,sim.mode =
c("full","express"))
{
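# posterior.mat[i, j] will hold the monitored posterior quantity for the outcome of
# y1.range[i] events in the control arm and y2.range[j] events in the treatment arm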
posterior.mat = matrix(NA,length(y1.range),length(y2.range))
cat(JAGS.binom.nonmix,file = "JAGS.model.txt",sep = "\n")
if (criterion.type == "diff")
{
if (sim.mode == "full")
{
for (i in 1:length(y1.range))
{
for (j in 1:length(y2.range))
{
jags.data = list(
'yh' = yh, 'nh' = nh,'y1' = y1.range[i], 'n1' = n1,'y2' = y2.range[j],'n2' = n2, 'alpha' =
tau.alpha,'beta' = tau.beta
)
jags.obj = jags.model(
"JAGS.model.txt",data = jags.data,n.chains = n.chain,n.adapt = 5000,quiet =
T
)
update(jags.obj,5000,progress.bar = "none")
tt = coda.samples(
jags.obj,variable.names = "delta",n.iter = 20000,thin = 1,progress.bar =
"none"
)
mcmc.sample = unlist(tt)
tt1 = mean(mcmc.sample)
posterior.mat[i,j] = tt1
progress.info = paste("y1=",y1.range[i],";y2=",y2.range[j],sep =
"")
print(progress.info)
}
}
}
if (sim.mode == "express")
{
p1 = NULL
for (i in 1:length(y1.range))
{
jags.data = list(
'yh' = yh, 'nh' = nh,'y1' = y1.range[i], 'n1' = n1,'y2' = y2.range[1],'n2' = n2, 'alpha' =
tau.alpha,'beta' = tau.beta
)
jags.obj = jags.model(
"JAGS.model.txt",data = jags.data,n.chains = n.chain,n.adapt = 5000,quiet =
T
)
update(jags.obj,5000,progress.bar = "none")
tt = coda.samples(
jags.obj,variable.names = "p1",n.iter = 20000,thin = 1,progress.bar =
"none"
)
mcmc.sample = unlist(tt)
p1 = cbind(p1,mcmc.sample)
progress.info = paste("y1=",y1.range[i],sep =
"")
print(progress.info)
}
p2 = NULL
for (j in 1:length(y2.range))
{
jags.data = list(
'yh' = yh, 'nh' = nh,'y1' = y1.range[1], 'n1' = n1,'y2' = y2.range[j],'n2' = n2,'alpha' =
tau.alpha,'beta' = tau.beta
)
jags.obj = jags.model(
"JAGS.model.txt",data = jags.data,n.chains = n.chain,n.adapt = 5000,quiet =
T
)
update(jags.obj,5000,progress.bar = "none")
tt = coda.samples(
jags.obj,variable.names = "p2",n.iter = 20000,thin = 1,progress.bar =
"none"
)
mcmc.sample = unlist(tt)
p2 = cbind(p2,mcmc.sample)
progress.info = paste("y2=",y2.range[j],sep =
"")
print(progress.info)
}
tt1 = colMeans(p1) %o% rep(1,length(y2.range))
tt2 = rep(1,length(y1.range)) %o% colMeans(p2)
posterior.mat = tt2 - tt1
}
rownames(posterior.mat) = y1.range
colnames(posterior.mat) = y2.range
file.remove("JAGS.model.txt")
rlist = list(
response = "Binary",posterior.mat = posterior.mat,y1.range = y1.range,y2.range =
y2.range,n1 = n1,n2 = n2,yh = yh,nh = nh,nsample = 20000 * n.chain,criterion.type =
criterion.type, sim.mode = sim.mode, tau.alpha = tau.alpha, tau.beta = tau.beta
)
class(rlist) = "BAC"
return(rlist)
}
if (criterion.type == "prob")
{
if (sim.mode == "full")
{
for (i in 1:length(y1.range))
{
for (j in 1:length(y2.range))
{
jags.data = list(
'yh' = yh, 'nh' = nh,'y1' = y1.range[i], 'n1' = n1,'y2' = y2.range[j],'n2' = n2, 'alpha' =
tau.alpha,'beta' = tau.beta
)
jags.obj = jags.model(
"JAGS.model.txt",data = jags.data,n.chains = n.chain,n.adapt = 5000,quiet =
T
)
update(jags.obj,5000,progress.bar = "none")
tt = coda.samples(
jags.obj,variable.names = "delta",n.iter = 20000,thin = 1,progress.bar =
"none"
)
mcmc.sample = unlist(tt)
tt1 = mean(mcmc.sample > prob.threshold)
posterior.mat[i,j] = tt1
progress.info = paste("y1=",y1.range[i],";y2=",y2.range[j],sep =
"")
print(progress.info)
}
}
}
if (sim.mode == "express")
{
p1 = NULL
for (i in 1:length(y1.range))
{
jags.data = list(
'yh' = yh, 'nh' = nh,'y1' = y1.range[i], 'n1' = n1,'y2' = y2.range[1],'n2' = n2, 'alpha' =
tau.alpha,'beta' = tau.beta
)
jags.obj = jags.model(
"JAGS.model.txt",data = jags.data,n.chains = n.chain,n.adapt = 5000,quiet =
T
)
update(jags.obj,5000,progress.bar = "none")
tt = coda.samples(
jags.obj,variable.names = "p1",n.iter = 20000,thin = 1,progress.bar =
"none"
)
mcmc.sample = unlist(tt)
p1 = cbind(p1,mcmc.sample)
progress.info = paste("y1=",y1.range[i],sep =
"")
print(progress.info)
}
p2 = NULL
for (j in 1:length(y2.range))
{
jags.data = list(
'yh' = yh, 'nh' = nh,'y1' = y1.range[1], 'n1' = n1,'y2' = y2.range[j],'n2' = n2,'alpha' =
tau.alpha,'beta' = tau.beta
)
jags.obj = jags.model(
"JAGS.model.txt",data = jags.data,n.chains = n.chain,n.adapt = 5000,quiet =
T
)
update(jags.obj,5000,progress.bar = "none")
tt = coda.samples(
jags.obj,variable.names = "p2",n.iter = 20000,thin = 1,progress.bar =
"none"
)
mcmc.sample = unlist(tt)
p2 = cbind(p2,mcmc.sample)
progress.info = paste("y2=",y2.range[j],sep =
"")
print(progress.info)
}
for (i in 1:length(y1.range))
{
for (j in 1:length(y2.range))
{
tt1 = p2[,j] - p1[,i]
posterior.mat[i,j] = mean(tt1 > prob.threshold)
}
}
}
rownames(posterior.mat) = y1.range
colnames(posterior.mat) = y2.range
file.remove("JAGS.model.txt")
rlist = list(
response = "Binary",posterior.mat = posterior.mat,y1.range = y1.range,y2.range =
y2.range,n1 = n1,n2 = n2,yh = yh,nh = nh,nsample = 20000 * n.chain,criterion.type =
criterion.type, sim.mode = sim.mode, tau.alpha = tau.alpha, tau.beta = tau.beta
)
class(rlist) = "BAC"
return(rlist)
}
}
| /scratch/gouwar.j/cran-all/cranData/BACCT/R/BAC_binom.R |
#'@title Evaluating a Decision Rule
#'@description Applies a decision rule to a "BAC" class object and provides rule
#' evaluation
#'@param object An object of class "BAC".
#'@param decision.rule A vector \code{c(a,b)} specifying the thresholds on the
#' posterior criterion, used either for claiming significance at the final
#' analysis or for making go/no-go decisions at an interim look. See "Details".
#'@param control.range A vector of control rates at which the decision rule is
#' evaluated.
#'@param es A vector of treatment arm effect sizes, compared to control arm.
#'@param csv.name If a name is specified, the output data set is exported in
#' CSV format.
#'@details The decision rules specified in \code{c(a,b)} may be in the context
#' of either interim or final analysis. At the interim, a "go" decision is made
#' if the criterion in the "BAC" object exceeds \code{b} and a "no go" decision
#' if such criterion is below \code{a}. Otherwise, the decision falls in the
#' gray zone.
#'
#' For the final analysis, the decision rule should satisfy \code{a}=\code{b}.
#' Significance is claimed if the criterion in the "BAC" object exceeds
#' \code{a}. Specifying an \code{a} larger than \code{b} will lead to an error.
#'
#' For an interim analysis, the specified decision rule is evaluated by the
#' probability of making a correct go or no-go decision. For the final analysis,
#' the power or type-I error is computed.
#'
#' Negative \code{es} values are allowed if a lower rate is desirable.
#'@return An object of class "BACdecision".
#'@author Hongtao Zhang
#' @examples
#' \dontrun{
#' #borrow from 3 historical trials#
#' yh = c(11,300,52);nh = c(45,877,128)
#' #specify current trial sample sizes#
#' n1 = 20;n2 = 30
#' obj = BAC_binom(yh=yh,nh=nh,n1=n1,n2=n2,n.chain=5,
#' criterion.type="prob",prob.threshold=0.1,sim.mode="express")
#'
#' rule = decision_eval(obj,decision.rule=c(0.05,0.15),
#' control.range=seq(0.3,0.5,0.01),es=c(0,0.1,0.2),csv.name="result.csv")
#'
#' #S3 method for class "BACdecision"
#' plot(rule,interim=T)
#' }
#' @export
#' @importFrom stats dbinom
#' @importFrom utils write.csv
decision_eval = function(object,decision.rule,control.range,es,csv.name =
NULL)
{
if (object$response == "Binary")
{
n1 = object$n1;n2 = object$n2
y1.range = object$y1.range;y2.range = object$y2.range
nogo.cut = decision.rule[1];go.cut = decision.rule[2]
#the no-go threshold must not exceed the go threshold
if (nogo.cut > go.cut)
{
stop("Invalid decision rule: the first element of 'decision.rule' must not exceed the second.")
}
else
{
outdt = data.frame()
for (k in 1:length(es))
{
tt = matrix(nogo.cut,length(y1.range),length(y2.range))
nogo.decision.mat = -1 * (object$posterior.mat < tt)
tt = matrix(go.cut,length(y1.range),length(y2.range))
go.decision.mat = 1 * (object$posterior.mat > tt)
nogo.probvec = go.probvec = rep(0,length(control.range))
for (i in 1:length(control.range))
{
p1 = control.range[i]
p2 = p1 + es[k]
tt1 = dbinom(y1.range,n1,p1)
tt2 = dbinom(y2.range,n2,p2)
jointmass.mat = tt1 %o% tt2
nogo.probvec[i] = sum(-nogo.decision.mat * jointmass.mat)
go.probvec[i] = sum(go.decision.mat * jointmass.mat)
}
#generate (part of) output dataset
ttt = data.frame(control.range,es[k],nogo.probvec,go.probvec)
outdt = rbind(outdt,ttt)
}
names(outdt) = c("control.rate","ES","nogo.prob","go.prob")
rlist = list(
response = object$response,decision.rule = decision.rule,control.range =
control.range,data = outdt
)
class(rlist) = "BACdecision"
if (!is.null(csv.name))
{
write.csv(outdt,csv.name,row.names = F)
}
return(rlist)
}
}
if (object$response == "Continuous")
{
cat("needs work...")
}
}
| /scratch/gouwar.j/cran-all/cranData/BACCT/R/decision_eval.R |
#' @title Heatmap for Decision Rules
#' @description Visualizing a decision rule for binary endpoint using heatmap
#' plots
#' @param object An object of "BAC" class.
#' @param decision.rule A vector of \code{c(a,b)} specifying the decision rule.
#' See help for \code{decision_eval} function.
#' @param y1.display,y2.display A subset of control/treatment number of events
#' to be displayed.
#' @examples
#' \dontrun{
#' #borrow from 3 historical trials#
#' yh = c(11,300,52);nh = c(45,877,128)
#' #specify current trial sample sizes#
#' n1 = 20;n2 = 30
#' obj = BAC_binom(yh=yh,nh=nh,n1=n1,n2=n2,n.chain=5,
#' criterion.type="prob",prob.threshold=0.1,sim.mode="express")
#'
#' #generate full heatmap
#' heatmap_decision(obj,decision.rule=c(0.05,0.15))
#' #generate partial heatmap
#' heatmap_decision(obj,decision.rule=c(0.05,0.15),y1.display=5:15,y2.display=10:25)
#'
#' }
#' @author Hongtao Zhang
#' @import ggplot2
#' @import reshape2
#' @export
heatmap_decision=function(object,decision.rule,y1.display=NA,y2.display=NA)
{
n1 = object$n1;n2 = object$n2
y1.range = object$y1.range;y2.range = object$y2.range
nogo.cut = decision.rule[1];go.cut = decision.rule[2]
tt = matrix(nogo.cut,length(y1.range),length(y2.range))
nogo.decision.mat = -1 * (object$posterior.mat < tt)
tt = matrix(go.cut,length(y1.range),length(y2.range))
go.decision.mat = 1 * (object$posterior.mat >= tt)
decision.mat = nogo.decision.mat + go.decision.mat
if(all(is.na(y1.display))){y1.display = y1.range}
if(all(is.na(y2.display))){y2.display = y2.range}
decision.long = reshape2::melt(decision.mat[y1.display+1,y2.display+1])
decision.long$valuecat = factor(decision.long$value)
if(decision.rule[1]==decision.rule[2])
{
cat("needs work")
}
if(decision.rule[1] < decision.rule[2])
{
#Basic elements
ggplot(decision.long,aes_string(x="Var2",y="Var1",fill="valuecat")) + geom_tile() + geom_tile(color="black",linetype="longdash",show.legend=F) +
#Remove gray background
theme_bw() +
#Axes
scale_x_continuous(breaks=y2.display,name="Treatment") + scale_y_continuous(breaks=y1.display,name="Control") +
#Legend
scale_fill_manual(breaks=c("-1","0","1"),labels=c("No Go","Gray","Go"),values=c("red","gray","darkgreen"),name="Decision") +
#Fine tunes
theme(axis.title.x = element_text(face="bold",size=15),
axis.text.x = element_text(size=12,face="bold"),
axis.title.y = element_text(face="bold",size=15),
axis.text.y = element_text(size=12,face="bold"),
panel.grid.major=element_blank(),
panel.grid.minor=element_blank(),
panel.border=element_blank()
)
}
}
| /scratch/gouwar.j/cran-all/cranData/BACCT/R/heatmap_decision.R |
#' @title Generateing Plot(s) Used for Decision Rule Evaluation
#' @description \code{plot} method for class "BACdecision"
#' @param x An object of "BACdecision" class.
#' @param es.null Effect size under the null hypothesis. Default is 0.
#' @param es.null.side "=" is the only option now.
#' @param interim Logical indicator of interim analysis (versus final
#' analysis). Figures will differ in legends and titles.
#' @param ... Argument to be passed to or from other methods
#' @details If \code{interim=F}, only one power/type I error figure will be
#' generated. Otherwise, two figures will be generated corresponding to "No Go"
#' and "Go" decisions respectively.
#' @return An object of "ggplot" class. Certain further edits are still allowed,
#' such as changing title and adding reference lines.
#' @author Hongtao Zhang
#' @import ggplot2
#' @export
plot.BACdecision=function(x,es.null=0,es.null.side,interim=F,...)
{
evaldata = x$data
evaldata$EScat = factor(evaldata$ES)
if(interim)
{
evaldata$nogo.status = ifelse(evaldata$ES==es.null,"Correct","Incorrect")
evaldata$go.status = ifelse(evaldata$ES==es.null,"Incorrect","Correct")
}
if(!interim)
{
evaldata$nogo.status = ifelse(evaldata$ES==es.null,"Power","Type I error")
evaldata$go.status = ifelse(evaldata$ES==es.null,"Type I error","Power")
}
if(x$response=="Binary")
{
nogo.plot =
#Basic elements
ggplot(evaldata,aes_string(x="control.rate",y="nogo.prob",group="EScat",linetype="EScat",color="nogo.status")) + geom_line(size=1.5) +
#Axes
scale_x_continuous(breaks=x$control.range,name="Control Rate") + scale_y_continuous(limits=c(0,1),breaks=seq(0,1,0.05),name="Probability") +
#Legend box names
scale_linetype_discrete(name="Effect Size") +
#Fine tunes of title fonts, legend fonts
theme(legend.title=element_text(size=15,face="bold"),
legend.text=element_text(size=12),
legend.key.size=unit(2,"cm"),
plot.title = element_text(size=20,face="bold"),
axis.title.x = element_text(face="bold",size=15),
axis.text.x = element_text(angle=90,vjust=0.5, size=12,face="bold"),
axis.title.y = element_text(face="bold",size=15),
axis.text.y = element_text(size=12,face="bold")
)
if(interim)
{
nogo.plot = nogo.plot +
ggtitle("No Go Decision Evaluation") +
scale_colour_manual(name="Decision",values=c("Correct"="green4","Incorrect"="red"))
}
if(!interim)
{
nogo.plot = nogo.plot +
ggtitle("Type I Error and Power") +
scale_colour_manual(name=" ",values=c("Power"="green4","Type I error"="red"))
}
go.plot =
#Basic elements
ggplot(evaldata,aes_string(x="control.rate",y="go.prob",group="EScat",linetype="EScat",color="go.status")) + geom_line(size=1.5) +
#Axes
scale_x_continuous(breaks=x$control.range,name="Control Rate") + scale_y_continuous(limits=c(0,1),breaks=seq(0,1,0.05),name="Probability") +
#Legend box names
scale_linetype_discrete(name="Effect Size") +
#Fine tunes of title fonts, legend fonts
theme(legend.title=element_text(size=15,face="bold"),
legend.text=element_text(size=12),
legend.key.size=unit(2,"cm"),
plot.title = element_text(size=20,face="bold"),
axis.title.x = element_text(face="bold",size=15),
axis.text.x = element_text(angle=90,vjust=0.5, size=12,face="bold"),
axis.title.y = element_text(face="bold",size=15),
axis.text.y = element_text(size=12,face="bold")
)
if(interim)
{
go.plot = go.plot +
ggtitle("Go Decision Evaluation") +
scale_colour_manual(name="Decision",values=c("Correct"="green4","Incorrect"="red"))
rlist = list(nogo.plot,go.plot)
}
if(!interim)
{
go.plot = go.plot +
ggtitle("Type I Error and Power") +
scale_colour_manual(name=" ",values=c("Power"="green4","Type I error"="red"))
rlist = list(go.plot)
}
return(rlist)
}
}
| /scratch/gouwar.j/cran-all/cranData/BACCT/R/plot.BACdecision.R |
#' @title Summarizing BAC
#' @description \code{summary} method for class "BAC"
#' @param object An object of class "BAC"
#' @param ... Argument to be passed to or from other methods
#' @author Hongtao Zhang
#' @export
#'
summary.BAC=function(object,...)
{
cat("\n****************************")
cat("\nDATA")
cat("\n****************************")
cat("\nHistorical control data:\n")
cat(paste(object$yh,collapse=","),"out of",paste(object$nh,collapse=","))
y1.min = min(object$y1.range);y1.max = max(object$y1.range)
y2.min = min(object$y2.range);y2.max = max(object$y2.range)
cat("\nRange of control arm in current trial:\n")
cat(paste(y1.min," to ",y1.max," out of ",object$n1,sep=""))
cat("\nRange of treatment arm in current trial:\n")
cat(paste(y2.min," to ",y2.max," out of ",object$n2,sep=""))
cat("\n****************************")
cat("\nMCMC PARAMETERS")
cat("\n****************************")
cat("\nPosterior quantity monitored:\n")
cat(object$criterion.type)
cat("\n# of posterior samples:\n")
cat(object$nsample)
}
| /scratch/gouwar.j/cran-all/cranData/BACCT/R/summary.BAC.R |
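# BACprior.CV: chooses the omega hyperparameter for BACprior.lm by repeated
# sample splitting. For each of V random half-splits, the exposure effect is
# estimated on one half with omega = Inf (the reference) and on the other half
# for every candidate omega; the omega minimizing the chosen criterion
# ("CVm" compares to the mean of the reference estimates, "CV" to each split's
# own reference estimate) is returned and the criterion curve is plotted.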
BACprior.CV<-function (Y, X, U, omega = c(1, 1.1, 1.3, 1.6, 2, 5, 10, 30, 50, 100, Inf), maxmodels = 150, cutoff = 1e-04, V = 100, criterion = "CVm")
{
na.fail(cbind(Y, X, U));
n = length(X);
n2 = floor(n/2);
Criterion = numeric(length(omega));
sample0 = matrix(0, nrow = V, ncol = (n - n2));
sample1 = matrix(0, nrow = V, ncol = n2);
Beta0 = numeric(V);
betas1 = matrix(0, nrow = V, ncol = length(omega));
for (v in 1:V)
{
samp1 = sample(1:n, n2);
samp0 = (1:n)[-samp1];
sample0[v,] = samp0;
sample1[v,] = samp1;
X0 = X[sample0[v,]]
U0 = U[sample0[v,], ];
Y0 = Y[sample0[v,]];
Beta0[v] = BACprior.lm(Y0, X0, U0, omega = Inf, maxmodels = maxmodels, cutoff = cutoff)$results[2];
X1 = X[sample1[v,]];
U1 = U[sample1[v,], ];
Y1 = Y[sample1[v,]];
betas1[v,] = BACprior.lm(Y1, X1, U1, omega = omega, maxmodels = maxmodels, cutoff = cutoff)$results[, 2];
}
if(criterion == "CVm"){beta0 = mean(Beta0)};
if(criterion == "CV"){beta0 = Beta0};
Criterion.Value = colSums((betas1 - beta0)^2/V);
best = omega[which.min(Criterion.Value)];
plot(omega, Criterion.Value, type = "b", main = "Criterion value according to omega value",
sub = bquote("Best omega = " ~ .(best)), ylab = criterion)
abline(v = best, col = "red", lty = 2)
return(list(best = best, Criterion = Criterion.Value))
}
| /scratch/gouwar.j/cran-all/cranData/BACprior/R/BACprior.CV.R |
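# BACprior.boot: chooses the omega hyperparameter for BACprior.lm via the
# bootstrap. The reference is the omega = Inf (fully adjusted) estimate on the
# original data; B bootstrap replicates (via the 'boot' package) give estimates
# for every candidate omega, and the omega with the smallest estimated MSE
# relative to the reference is returned and the sqrt-MSE curve is plotted.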
BACprior.boot<-function (Y, X, U, omega = c(1, 1.1, 1.3, 1.6, 2, 5, 10, 30, 50, 100, Inf), maxmodels = 150, cutoff = 1e-04, B = 100)
{
na.fail(cbind(Y, X, U));
MSE = numeric(length(omega));
beta0 = BACprior.lm(Y, X, U, omega = Inf, maxmodels = maxmodels, cutoff = cutoff)$results[2];
dat = cbind(Y,X,U);
betas1f = function(dat, i)
{
Y = dat[i,1];
X = dat[i,2];
U = dat[i, -c(1,2)];
BACprior.lm(Y, X, U, omega = omega, maxmodels = maxmodels, cutoff = cutoff)$results[, 2];
}
betas1 = boot(data = dat, statistic = betas1f, R = B)$t;
MSE = colSums((betas1 - beta0)^2/B);
best = omega[which.min(sqrt(MSE))];
plot(omega, sqrt(MSE), type = "b", main = "Estimated sqrt-MSE according to omega value",
sub = bquote("Best omega = " ~ .(best)))
abline(v = best, col = "red", lty = 2)
return(list(best = best, MSE = MSE))
} | /scratch/gouwar.j/cran-all/cranData/BACprior/R/BACprior.boot.R |
BACprior.lm = function(Y, X, U, omega = c(1, 1.1, 1.3, 1.6, 2, 5, 10, 30, 50, 100, Inf), maxmodels = 150, cutoff = 0.0001, return.best = FALSE)
{
na.fail(cbind(Y, X, U));
n = length(X);
ncov = ncol(U);
resultsX = summary(regsubsets(y = X, x = U, nbest = maxmodels, really.big = T, nvmax = ncov));
MLx = exp(-resultsX$bic/2 + min(resultsX$bic)/2);
MLx = MLx/sum(MLx);
modelsX = resultsX$which[,-1]; #The null model is not considered
resultsYa = regsubsets(y = Y, x = cbind(X,U), force.in = 1, nbest = maxmodels, really.big = T, nvmax = ncov + 1);
resultsY = summary(resultsYa);
MLy = exp(-resultsY$bic/2 + min(resultsY$bic)/2);
MLy = MLy/sum(MLy);
modelsY = resultsY$which[,-c(1,2)]; #The null model is not considered
inX_notinY = matrix(apply(modelsY, 1, function(x){apply(modelsX, 1, function(y){sum(y > x)})}), ncol = nrow(modelsX), byrow = T);
#each column indicate the number of covariates in the different X models that are not in the given Y model
Nothers = ncov - inX_notinY;
l = length(omega);
MF = matrix(0, nrow = nrow(modelsY), ncol = l);
for(i in 1:l)
{
if(omega[i] == Inf)
{
a = 1/3;
b = 0;
}
else
{
a = omega[i]/(3*omega[i]+1);
b = 1/(3*omega[i]+1);
}
MF[,i] = (a**Nothers*b**inX_notinY)%*%MLx;
}
unnormprob = apply(MF, 2, "*", MLy);
normprob = apply(unnormprob, 2, function(x){x/sum(x)});
normprob = normprob*(normprob>cutoff);
usedmodels = which(rowSums(normprob) > 0);
est = coef(resultsYa, id = usedmodels);
VarEst = vcov(resultsYa, id = usedmodels);
betas = numeric(nrow(modelsY));
variances = numeric(nrow(modelsY));
if(length(usedmodels) == 1)
{
betas[usedmodels] = est[2];
variances[usedmodels] = VarEst[2,2];
}
else
{
k = 1;
for(i in usedmodels)
{
betas[i] = est[[k]][2];
variances[i] = VarEst[[k]][2,2];
k = k + 1;
}
}
beta = apply(normprob, 2, function(x){sum(x*betas)});
variance1 = apply(normprob, 2, function(x){sum(x*(variances + betas**2))});
variance2 = variance1 - beta**2;
results = cbind(omega, beta, sqrt(variance2));
rownames(results) = NULL;
colnames(results) = c("omega", "Posterior mean", "Standard deviation");
if(return.best == FALSE)
{
return(list(results = results));
}
else
{
colnames(modelsY) = colnames(U);
colnames(normprob) = omega;
return(list(results = results, best.models = modelsY[usedmodels,], posterior.prob = normprob[usedmodels,]));
}
} | /scratch/gouwar.j/cran-all/cranData/BACprior/R/BACprior.lm.R |
#' @export
################################################################
#The function to calculate the empirical p-value of multiple-split
# results from simulated datasets
# If the number of partition variables is larger than Kmax,
# apply a random forest to preselect Kmax partition variables
################################################################
# g: number of groups from the partition (determined by the partition function)
# nsplits: number of splits applied in the multiple splitting
# ne: test set size
# nsim: number of simulated data sets to calculate the empirical p-value
BAGofT <- function(testModel, parFun = parRF(),
data, nsplits = 100, ne = floor(5*nrow(data)^(1/2)), nsim = 100){
testRes <- BAGofT_multi(testModel = testModel, parFun = parFun,
data = data, nsplits = nsplits, ne = ne)
if (nsim >= 1){
# simulate data
pmeansimVec <- numeric(nsim)
pmediansimVec <- numeric(nsim)
pminsimVec <- numeric(nsim)
message("Generating simulated data for empirical p-value")
# fit the model to test by training data
# we do not need the test-set predictions here, so a copy of the data
# (with its first row zeroed) is used only as a placeholder validation set
dataFittemp <- data
dataFittemp[1, ]<- 0
# fit model on the dataset
modFit <- testModel(Train.data = data, Validation.data = dataFittemp)
# probability calculated from fitted coefficients
pdat2 <- modFit$predT
for (i in c(1:nsim)){
# process bar
message(paste("Calculating results from ", i, "th simulated dataset"))
# randomly generated data from the fitted probabilities
ydat2 <- sapply(pdat2, function(x) stats::rbinom(1, 1, x))
dat2 <- data
Rsp <- modFit$Rsp
dat2[,Rsp] <- ydat2
# mean statistic calculated from the simulated data.
testRes_sim <- BAGofT_multi(testModel = testModel, parFun = parFun,
data = dat2, nsplits = nsplits, ne = ne)
pmeansimVec[i] <- testRes_sim$meanPv
pmediansimVec[i] <- testRes_sim$medianPv
pminsimVec[i] <- testRes_sim$minPv
}
# calculate empirical p-value from simulated data
pvalue <- mean(testRes$meanPv > pmeansimVec)
pvalue2 <- mean(testRes$medianPv > pmediansimVec)
pvalue3 <- mean(testRes$minPv > pminsimVec)
message(paste("p-value: ", pvalue,
"Averaged statistic value: ", testRes$meanPv))
return(invisible( list(p.value = pvalue,
p.value2 = pvalue2,
p.value3 = pvalue3,
pmean = testRes$meanPv,
pmedian = testRes$medianPv,
pmin = testRes$minPv,
simRes = list(pmeanSim = pmeansimVec,
pmediansim = pmediansimVec,
pminsim = pminsimVec),
singleSplit.results = testRes$spliDat ) ))
}else{
message(paste("Averaged statistic value: ", testRes$meanPv))
return( invisible( list(
pmean = testRes$meanPv,
pmedian = testRes$medianPv,
pmin = testRes$minPv,
singleSplit.results = testRes$spliDat ) ))
}
}
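# Illustrative sketch (not part of the package): a 'testModel' argument must be
# a function of Train.data and Validation.data that returns the response name
# ('Rsp'), fitted training probabilities ('predT'), validation-set probabilities
# ('predE'), and training-set residuals ('res'), since these are the fields the
# split-level test accesses. The formula below (y ~ x1 + x2) and the use of
# Pearson residuals are assumptions for illustration only.
testGlmSketch <- function(Train.data, Validation.data){
  fit <- stats::glm(y ~ x1 + x2, data = Train.data, family = stats::binomial())
  predT <- stats::predict(fit, newdata = Train.data, type = "response")
  predE <- stats::predict(fit, newdata = Validation.data, type = "response")
  list(Rsp = "y",
       predT = predT,
       predE = predE,
       # Pearson residuals on the training set, passed to the partition function
       res = (Train.data$y - predT) / sqrt(predT * (1 - predT)))
}
# Example call (assuming a data frame 'dat' with columns y, x1, x2):
# BAGofT(testModel = testGlmSketch, data = dat)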
########################################################################################
########################################################################################
| /scratch/gouwar.j/cran-all/cranData/BAGofT/R/BAGofT.R |
################################################################
#The function to calculate the mean of chi-square statistics
# from multiple splits.
################################################################
# nsplits: number of splits
BAGofT_multi <- function(testModel, parFun,
data, nsplits, ne){
spliDat <- list()
for (j in c(1:nsplits)){
spliDat[[j]] <- BAGofT_sin(testModel = testModel,
parFun = parFun,
datset = data,
ne = ne)
}
# pvalues from multiple splits
pvdat <- unlist(lapply(c(1:nsplits), function(x) spliDat[[x]]$p.value))
return( list( meanPv = mean(pvdat),
medianPv = stats::median(pvdat),
minPv = min(pvdat),
spliDat = spliDat ) )
}
| /scratch/gouwar.j/cran-all/cranData/BAGofT/R/BAGofT_multi.R |
################################################################
#The function to calculate the chi-square statistic
# value from a single split BAGofT
# previous version fits residual by random forest,
# current version fits Pearson residual
################################################################
BAGofT_sin <- function(testModel, parFun,
datset, ne){
# number of minimum observations in a group
# nmin <- ceiling(sqrt(ne))
# number of rows in the dataset
nr <- nrow(datset)
# obtain the training set size
nt <- nr - ne
# the indices for training set observations
trainIn <- sample(c(1 : nr), nt)
#split the data
datT <- datset[trainIn, ]
datE <- datset[-trainIn, ]
# fit the model to test by training data
testMod <- testModel(Train.data = datT, Validation.data = datE)
# obtain adaptive partition result from parFun
par <- parFun(Rsp = testMod$Rsp, predT = testMod$predT, res = testMod$res,
Train.data = datT, Validation.data = datE)
#calculate the number of groups left
ngp <- length(levels(par$gup))
#########calculate the difference in each group
dif <- abs(stats :: xtabs(testMod$predE - datE[,testMod$Rsp] ~ par$gup))
#calculate the denominator in each group
den <- stats :: xtabs(testMod$predE * (1 - testMod$predE) ~ par$gup)
#########calculate the chisquare sum
contri <- (dif)^2/den
chisq <- sum(contri)
#calculate test statistic (p value).
P = 1 - stats :: pchisq(chisq, ngp)
#pass values to list gls
gls <- list(chisq = chisq, p.value = P, ngp = ngp, contri = contri, parRes = par$parRes)
return(gls)
}
########################################################################################
########################################################################################
| /scratch/gouwar.j/cran-all/cranData/BAGofT/R/BAGofT_sin.R |
#' @export
# calculates variable importance from test output
VarImp <- function(TestRes){
viList <- TestRes$singleSplit.results
Var.imp <- apply(simplify2array(lapply(c(1:length( viList)), function(x){viList[[x]]$parRes$Var.imp})), c(1,2), mean)
if ("preVar.imp" %in% names(TestRes$singleSplit.results[[1]]$parRes) ) {
preVar.imp <- apply(simplify2array(lapply(c(1:length( viList)), function(x){viList[[x]]$parRes$preVar.imp})), c(1,2), mean)
return(list(Var.imp = Var.imp, preVar.imp = preVar.imp))
}else{
return(list(Var.imp = Var.imp))
}
}
| /scratch/gouwar.j/cran-all/cranData/BAGofT/R/VarImp.R |
# Preselection by distance correlation: when the number of candidate partition
# variables exceeds npreSel, keep the npreSel variables with the largest
# distance correlation with the residuals (computed via dcov::mdcor);
# otherwise no preselection is done. The 'type' argument is currently unused.
dcPre <- function( npreSel = 5,
type = "V"){
return(function(datRf, parVar){
# count the number of variables to partition
nParV <- if(!identical(parVar, ".")){length(parVar)}else{ncol(datRf) - 1}
if (nParV > npreSel){
preSelected <- TRUE
datRf_temp <- datRf
datRf_temp$res <- NULL
# calculate the distance correlation
dcRes <- t(dcov :: mdcor(datRf$res, datRf_temp))
rownames(dcRes) <- colnames(datRf_temp)
# select Kmax partition variables with the largest variable importance
parVarNew <- colnames(datRf_temp)[order(-as.numeric(dcRes))[c(1: npreSel)]]
return(list(preSelected = preSelected, parVarNew = parVarNew, VI = dcRes))
}else{
preSelected = FALSE
parVarNew <- parVar
return(list(preSelected = preSelected, parVarNew = parVarNew))
}
})
} | /scratch/gouwar.j/cran-all/cranData/BAGofT/R/dcPre.R |