
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.dataproc/v1beta2.getWorkflowTemplate

Retrieves the latest workflow template. A previously instantiated template can be retrieved by specifying the optional version parameter.

Using getWorkflowTemplate

Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

function getWorkflowTemplate(args: GetWorkflowTemplateArgs, opts?: InvokeOptions): Promise<GetWorkflowTemplateResult>
function getWorkflowTemplateOutput(args: GetWorkflowTemplateOutputArgs, opts?: InvokeOptions): Output<GetWorkflowTemplateResult>
def get_workflow_template(location: Optional[str] = None,
                          project: Optional[str] = None,
                          version: Optional[int] = None,
                          workflow_template_id: Optional[str] = None,
                          opts: Optional[InvokeOptions] = None) -> GetWorkflowTemplateResult
def get_workflow_template_output(location: Optional[pulumi.Input[str]] = None,
                                 project: Optional[pulumi.Input[str]] = None,
                                 version: Optional[pulumi.Input[int]] = None,
                                 workflow_template_id: Optional[pulumi.Input[str]] = None,
                                 opts: Optional[InvokeOptions] = None) -> Output[GetWorkflowTemplateResult]
func LookupWorkflowTemplate(ctx *Context, args *LookupWorkflowTemplateArgs, opts ...InvokeOption) (*LookupWorkflowTemplateResult, error)
func LookupWorkflowTemplateOutput(ctx *Context, args *LookupWorkflowTemplateOutputArgs, opts ...InvokeOption) LookupWorkflowTemplateResultOutput

> Note: This function is named LookupWorkflowTemplate in the Go SDK.

public static class GetWorkflowTemplate 
{
    public static Task<GetWorkflowTemplateResult> InvokeAsync(GetWorkflowTemplateArgs args, InvokeOptions? opts = null)
    public static Output<GetWorkflowTemplateResult> Invoke(GetWorkflowTemplateInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetWorkflowTemplateResult> getWorkflowTemplate(GetWorkflowTemplateArgs args, InvokeOptions options)
public static Output<GetWorkflowTemplateResult> getWorkflowTemplate(GetWorkflowTemplateArgs args, InvokeOptions options)
fn::invoke:
  function: google-native:dataproc/v1beta2:getWorkflowTemplate
  arguments:
    # arguments dictionary
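For example, a direct-form invocation in TypeScript might look like the following sketch. It assumes the @pulumi/google-native Node SDK's dataproc.v1beta2 module; the project, region, and template ID are placeholder values.

import * as google_native from "@pulumi/google-native";

// Direct form: plain arguments, Promise-wrapped result.
const template = google_native.dataproc.v1beta2.getWorkflowTemplate({
    project: "my-project",             // placeholder
    location: "us-central1",           // placeholder
    workflowTemplateId: "my-template", // placeholder
    // version: 2,  // optionally retrieve a previously instantiated version
});

// The Promise resolves to a GetWorkflowTemplateResult.
export const templateName = template.then(t => t.name);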

The following arguments are supported:

Location This property is required. string
WorkflowTemplateId This property is required. string
Project string
Version int
Location This property is required. string
WorkflowTemplateId This property is required. string
Project string
Version int
location This property is required. String
workflowTemplateId This property is required. String
project String
version Integer
location This property is required. string
workflowTemplateId This property is required. string
project string
version number
location This property is required. str
workflow_template_id This property is required. str
project str
version int
location This property is required. String
workflowTemplateId This property is required. String
project String
version Number
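When an argument value is itself an Output — for example read from stack configuration or from another resource — the output form keeps the whole lookup lifted instead of awaiting a Promise. A minimal TypeScript sketch using the arguments above, with hypothetical config keys:

import * as pulumi from "@pulumi/pulumi";
import * as google_native from "@pulumi/google-native";

const cfg = new pulumi.Config();

// Output form: Input-wrapped arguments, Output-wrapped result.
const tmpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: cfg.require("project"),
    location: cfg.require("region"),
    workflowTemplateId: cfg.require("templateId"),
});

// Result properties are available directly as Outputs.
export const templateLabels = tmpl.labels;
export const templateVersion = tmpl.version;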

getWorkflowTemplate Result

The following output properties are available:

CreateTime string
The time template was created.
DagTimeout string
Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
Jobs List<Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.OrderedJobResponse>
The Directed Acyclic Graph of Jobs to submit.
Labels Dictionary<string, string>
Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
Name string
The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
Parameters List<Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.TemplateParameterResponse>
Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
Placement Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.WorkflowTemplatePlacementResponse
WorkflowTemplate scheduling information.
UpdateTime string
The time template was last updated.
Version int
Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
CreateTime string
The time template was created.
DagTimeout string
Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
Jobs []OrderedJobResponse
The Directed Acyclic Graph of Jobs to submit.
Labels map[string]string
Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
Name string
The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
Parameters []TemplateParameterResponse
Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
Placement WorkflowTemplatePlacementResponse
WorkflowTemplate scheduling information.
UpdateTime string
The time template was last updated.
Version int
Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
createTime String
The time template was created.
dagTimeout String
Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
jobs List<OrderedJobResponse>
The Directed Acyclic Graph of Jobs to submit.
labels Map<String,String>
Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
name String
The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
parameters List<TemplateParameterResponse>
Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
placement WorkflowTemplatePlacementResponse
WorkflowTemplate scheduling information.
updateTime String
The time template was last updated.
version Integer
Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
createTime string
The time template was created.
dagTimeout string
Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
jobs OrderedJobResponse[]
The Directed Acyclic Graph of Jobs to submit.
labels {[key: string]: string}
Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
name string
The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
parameters TemplateParameterResponse[]
Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
placement WorkflowTemplatePlacementResponse
WorkflowTemplate scheduling information.
updateTime string
The time template was last updated.
version number
Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
create_time str
The time template was created.
dag_timeout str
Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
jobs Sequence[OrderedJobResponse]
The Directed Acyclic Graph of Jobs to submit.
labels Mapping[str, str]
Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
name str
The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
parameters Sequence[TemplateParameterResponse]
Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
placement WorkflowTemplatePlacementResponse
WorkflowTemplate scheduling information.
update_time str
The time template was last updated.
version int
Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
createTime String
The time template was created.
dagTimeout String
Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
jobs List<Property Map>
The Directed Acyclic Graph of Jobs to submit.
labels Map<String>
Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
name String
The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
parameters List<Property Map>
Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
placement Property Map
WorkflowTemplate scheduling information.
updateTime String
The time template was last updated.
version Number
Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
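The version property exists to support that read-modify-write cycle: fetch the template, note the server's current version, and echo it back on the subsequent UpdateWorkflowTemplate request. The TypeScript sketch below shows only the read half using this invoke — the update itself would be issued by whatever client or resource manages the template — and uses placeholder identifiers.

import * as google_native from "@pulumi/google-native";

async function currentTemplateVersion(): Promise<number | undefined> {
    // Fetch the latest template; its `version` is the value an
    // UpdateWorkflowTemplate request must send back unchanged.
    const tmpl = await google_native.dataproc.v1beta2.getWorkflowTemplate({
        project: "my-project",             // placeholder
        location: "us-central1",           // placeholder
        workflowTemplateId: "my-template", // placeholder
    });
    return tmpl.version;
}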

Supporting Types

AcceleratorConfigResponse

AcceleratorCount This property is required. int
The number of the accelerator cards of this type exposed to this instance.
AcceleratorTypeUri This property is required. string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
AcceleratorCount This property is required. int
The number of the accelerator cards of this type exposed to this instance.
AcceleratorTypeUri This property is required. string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
acceleratorCount This property is required. Integer
The number of the accelerator cards of this type exposed to this instance.
acceleratorTypeUri This property is required. String
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
acceleratorCount This property is required. number
The number of the accelerator cards of this type exposed to this instance.
acceleratorTypeUri This property is required. string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
accelerator_count This property is required. int
The number of the accelerator cards of this type exposed to this instance.
accelerator_type_uri This property is required. str
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
acceleratorCount This property is required. Number
The number of the accelerator cards of this type exposed to this instance.
acceleratorTypeUri This property is required. String
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

AutoscalingConfigResponse

PolicyUri This property is required. string
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
PolicyUri This property is required. string
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policyUri This property is required. String
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policyUri This property is required. string
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policy_uri This property is required. str
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policyUri This property is required. String
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

ClusterConfigResponse

AutoscalingConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
ConfigBucket This property is required. string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
EncryptionConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.EncryptionConfigResponse
Optional. Encryption settings for the cluster.
EndpointConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster
GceClusterConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
GkeClusterConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.GkeClusterConfigResponse
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
InitializationActions This property is required. List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.NodeInitializationActionResponse>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
LifecycleConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LifecycleConfigResponse
Optional. The config setting for auto delete cluster schedule.
MasterConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the master instance in a cluster.
MetastoreConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.MetastoreConfigResponse
Optional. Metastore configuration.
SecondaryWorkerConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse
Optional. The Compute Engine config settings for additional worker instances in a cluster.
SecurityConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SecurityConfigResponse
Optional. Security related configuration.
SoftwareConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SoftwareConfigResponse
Optional. The config settings for software inside the cluster.
TempBucket This property is required. string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
WorkerConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse
Optional. The Compute Engine config settings for worker instances in a cluster.
AutoscalingConfig This property is required. AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
ConfigBucket This property is required. string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
EncryptionConfig This property is required. EncryptionConfigResponse
Optional. Encryption settings for the cluster.
EndpointConfig This property is required. EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster
GceClusterConfig This property is required. GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
GkeClusterConfig This property is required. GkeClusterConfigResponse
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
InitializationActions This property is required. []NodeInitializationActionResponse
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
LifecycleConfig This property is required. LifecycleConfigResponse
Optional. The config setting for auto delete cluster schedule.
MasterConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the master instance in a cluster.
MetastoreConfig This property is required. MetastoreConfigResponse
Optional. Metastore configuration.
SecondaryWorkerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for additional worker instances in a cluster.
SecurityConfig This property is required. SecurityConfigResponse
Optional. Security related configuration.
SoftwareConfig This property is required. SoftwareConfigResponse
Optional. The config settings for software inside the cluster.
TempBucket This property is required. string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
WorkerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for worker instances in a cluster.
autoscalingConfig This property is required. AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
configBucket This property is required. String
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
encryptionConfig This property is required. EncryptionConfigResponse
Optional. Encryption settings for the cluster.
endpointConfig This property is required. EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster
gceClusterConfig This property is required. GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
gkeClusterConfig This property is required. GkeClusterConfigResponse
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initializationActions This property is required. List<NodeInitializationActionResponse>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
lifecycleConfig This property is required. LifecycleConfigResponse
Optional. The config setting for auto delete cluster schedule.
masterConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the master instance in a cluster.
metastoreConfig This property is required. MetastoreConfigResponse
Optional. Metastore configuration.
secondaryWorkerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for additional worker instances in a cluster.
securityConfig This property is required. SecurityConfigResponse
Optional. Security related configuration.
softwareConfig This property is required. SoftwareConfigResponse
Optional. The config settings for software inside the cluster.
tempBucket This property is required. String
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
workerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for worker instances in a cluster.
autoscalingConfig This property is required. AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
configBucket This property is required. string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
encryptionConfig This property is required. EncryptionConfigResponse
Optional. Encryption settings for the cluster.
endpointConfig This property is required. EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster
gceClusterConfig This property is required. GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
gkeClusterConfig This property is required. GkeClusterConfigResponse
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initializationActions This property is required. NodeInitializationActionResponse[]
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
lifecycleConfig This property is required. LifecycleConfigResponse
Optional. The config setting for auto delete cluster schedule.
masterConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the master instance in a cluster.
metastoreConfig This property is required. MetastoreConfigResponse
Optional. Metastore configuration.
secondaryWorkerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for additional worker instances in a cluster.
securityConfig This property is required. SecurityConfigResponse
Optional. Security related configuration.
softwareConfig This property is required. SoftwareConfigResponse
Optional. The config settings for software inside the cluster.
tempBucket This property is required. string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
workerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for worker instances in a cluster.
autoscaling_config This property is required. AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
config_bucket This property is required. str
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
encryption_config This property is required. EncryptionConfigResponse
Optional. Encryption settings for the cluster.
endpoint_config This property is required. EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster
gce_cluster_config This property is required. GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
gke_cluster_config This property is required. GkeClusterConfigResponse
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initialization_actions This property is required. Sequence[NodeInitializationActionResponse]
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
lifecycle_config This property is required. LifecycleConfigResponse
Optional. The config setting for auto delete cluster schedule.
master_config This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the master instance in a cluster.
metastore_config This property is required. MetastoreConfigResponse
Optional. Metastore configuration.
secondary_worker_config This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for additional worker instances in a cluster.
security_config This property is required. SecurityConfigResponse
Optional. Security related configuration.
software_config This property is required. SoftwareConfigResponse
Optional. The config settings for software inside the cluster.
temp_bucket This property is required. str
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
worker_config This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for worker instances in a cluster.
autoscalingConfig This property is required. Property Map
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
configBucket This property is required. String
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
encryptionConfig This property is required. Property Map
Optional. Encryption settings for the cluster.
endpointConfig This property is required. Property Map
Optional. Port/endpoint configuration for this cluster
gceClusterConfig This property is required. Property Map
Optional. The shared Compute Engine config settings for all instances in a cluster.
gkeClusterConfig This property is required. Property Map
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initializationActions This property is required. List<Property Map>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
lifecycleConfig This property is required. Property Map
Optional. The config setting for auto delete cluster schedule.
masterConfig This property is required. Property Map
Optional. The Compute Engine config settings for the master instance in a cluster.
metastoreConfig This property is required. Property Map
Optional. Metastore configuration.
secondaryWorkerConfig This property is required. Property Map
Optional. The Compute Engine config settings for additional worker instances in a cluster.
securityConfig This property is required. Property Map
Optional. Security related configuration.
softwareConfig This property is required. Property Map
Optional. The config settings for software inside the cluster.
tempBucket This property is required. String
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
workerConfig This property is required. Property Map
Optional. The Compute Engine config settings for worker instances in a cluster.

ClusterSelectorResponse

ClusterLabels This property is required. Dictionary<string, string>
The cluster labels. Cluster must have all labels to match.
Zone This property is required. string
Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
ClusterLabels This property is required. map[string]string
The cluster labels. Cluster must have all labels to match.
Zone This property is required. string
Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
clusterLabels This property is required. Map<String,String>
The cluster labels. Cluster must have all labels to match.
zone This property is required. String
Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
clusterLabels This property is required. {[key: string]: string}
The cluster labels. Cluster must have all labels to match.
zone This property is required. string
Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
cluster_labels This property is required. Mapping[str, str]
The cluster labels. Cluster must have all labels to match.
zone This property is required. str
Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
clusterLabels This property is required. Map<String>
The cluster labels. Cluster must have all labels to match.
zone This property is required. String
Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.

DiskConfigResponse

BootDiskSizeGb This property is required. int
Optional. Size in GB of the boot disk (default is 500GB).
BootDiskType This property is required. string
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
NumLocalSsds This property is required. int
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
BootDiskSizeGb This property is required. int
Optional. Size in GB of the boot disk (default is 500GB).
BootDiskType This property is required. string
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
NumLocalSsds This property is required. int
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
bootDiskSizeGb This property is required. Integer
Optional. Size in GB of the boot disk (default is 500GB).
bootDiskType This property is required. String
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
numLocalSsds This property is required. Integer
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
bootDiskSizeGb This property is required. number
Optional. Size in GB of the boot disk (default is 500GB).
bootDiskType This property is required. string
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
numLocalSsds This property is required. number
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
boot_disk_size_gb This property is required. int
Optional. Size in GB of the boot disk (default is 500GB).
boot_disk_type This property is required. str
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
num_local_ssds This property is required. int
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
bootDiskSizeGb This property is required. Number
Optional. Size in GB of the boot disk (default is 500GB).
bootDiskType This property is required. String
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
numLocalSsds This property is required. Number
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.

EncryptionConfigResponse

GcePdKmsKeyName This property is required. string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
GcePdKmsKeyName This property is required. string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
gcePdKmsKeyName This property is required. String
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
gcePdKmsKeyName This property is required. string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
gce_pd_kms_key_name This property is required. str
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
gcePdKmsKeyName This property is required. String
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
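
A minimal sketch, with placeholder names, of surfacing this single field from the lookup result; it assumes the template carries a managed cluster whose config includes encryptionConfig.

import * as google_native from "@pulumi/google-native";

// Placeholder project, region, and template id.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// An empty key name generally means the cluster relies on Google-managed encryption rather than CMEK.
export const pdKmsKey = tpl.apply(
    t => t.placement.managedCluster?.config.encryptionConfig.gcePdKmsKeyName ?? "");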

EndpointConfigResponse

EnableHttpPortAccess This property is required. bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
HttpPorts This property is required. Dictionary<string, string>
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
EnableHttpPortAccess This property is required. bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
HttpPorts This property is required. map[string]string
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
enableHttpPortAccess This property is required. Boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
httpPorts This property is required. Map<String,String>
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
enableHttpPortAccess This property is required. boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
httpPorts This property is required. {[key: string]: string}
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
enable_http_port_access This property is required. bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
http_ports This property is required. Mapping[str, str]
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
enableHttpPortAccess This property is required. Boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
httpPorts This property is required. Map<String>
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
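
Because http_ports is only populated when enable_http_port_access is true, a consumer should guard on that flag. The sketch below does so; the identifiers are placeholders and the nesting path is assumed from the v1beta2 cluster config shape.

import * as google_native from "@pulumi/google-native";

// Placeholder project, region, and template id.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// Return the Component Gateway URL map only when HTTP port access is enabled.
export const componentGatewayUrls = tpl.apply(t => {
    const endpoint = t.placement.managedCluster?.config.endpointConfig;
    return endpoint?.enableHttpPortAccess ? endpoint.httpPorts : {};
});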

GceClusterConfigResponse

InternalIpOnly This property is required. bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
Metadata This property is required. Dictionary<string, string>
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
NetworkUri This property is required. string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default, projects/[project_id]/regions/global/default, default
NodeGroupAffinity This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
PrivateIpv6GoogleAccess This property is required. string
Optional. The type of IPv6 access for a cluster.
ReservationAffinity This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ReservationAffinityResponse
Optional. Reservation Affinity for consuming Zonal reservation.
ServiceAccount This property is required. string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
ServiceAccountScopes This property is required. List<string>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control
ShieldedInstanceConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
SubnetworkUri This property is required. string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0, projects/[project_id]/regions/us-east1/subnetworks/sub0, sub0
Tags This property is required. List<string>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
ZoneUri This property is required. string
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], us-central1-f
InternalIpOnly This property is required. bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
Metadata This property is required. map[string]string
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
NetworkUri This property is required. string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default, projects/[project_id]/regions/global/default, default
NodeGroupAffinity This property is required. NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
PrivateIpv6GoogleAccess This property is required. string
Optional. The type of IPv6 access for a cluster.
ReservationAffinity This property is required. ReservationAffinityResponse
Optional. Reservation Affinity for consuming Zonal reservation.
ServiceAccount This property is required. string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
ServiceAccountScopes This property is required. []string
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control
ShieldedInstanceConfig This property is required. ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
SubnetworkUri This property is required. string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0, projects/[project_id]/regions/us-east1/subnetworks/sub0, sub0
Tags This property is required. []string
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
ZoneUri This property is required. string
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], us-central1-f
internalIpOnly This property is required. Boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata This property is required. Map<String,String>
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
networkUri This property is required. String
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default, projects/[project_id]/regions/global/default, default
nodeGroupAffinity This property is required. NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
privateIpv6GoogleAccess This property is required. String
Optional. The type of IPv6 access for a cluster.
reservationAffinity This property is required. ReservationAffinityResponse
Optional. Reservation Affinity for consuming Zonal reservation.
serviceAccount This property is required. String
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
serviceAccountScopes This property is required. List<String>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control
shieldedInstanceConfig This property is required. ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetworkUri This property is required. String
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0, projects/[project_id]/regions/us-east1/subnetworks/sub0, sub0
tags This property is required. List<String>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zoneUri This property is required. String
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], us-central1-f
internalIpOnly This property is required. boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata This property is required. {[key: string]: string}
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
networkUri This property is required. string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default, projects/[project_id]/regions/global/default, default
nodeGroupAffinity This property is required. NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
privateIpv6GoogleAccess This property is required. string
Optional. The type of IPv6 access for a cluster.
reservationAffinity This property is required. ReservationAffinityResponse
Optional. Reservation Affinity for consuming Zonal reservation.
serviceAccount This property is required. string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
serviceAccountScopes This property is required. string[]
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control
shieldedInstanceConfig This property is required. ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetworkUri This property is required. string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0, projects/[project_id]/regions/us-east1/subnetworks/sub0, sub0
tags This property is required. string[]
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zoneUri This property is required. string
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], us-central1-f
internal_ip_only This property is required. bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata This property is required. Mapping[str, str]
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
network_uri This property is required. str
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default, projects/[project_id]/regions/global/default, default
node_group_affinity This property is required. NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
private_ipv6_google_access This property is required. str
Optional. The type of IPv6 access for a cluster.
reservation_affinity This property is required. ReservationAffinityResponse
Optional. Reservation Affinity for consuming Zonal reservation.
service_account This property is required. str
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
service_account_scopes This property is required. Sequence[str]
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control
shielded_instance_config This property is required. ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetwork_uri This property is required. str
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0, projects/[project_id]/regions/us-east1/subnetworks/sub0, sub0
tags This property is required. Sequence[str]
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zone_uri This property is required. str
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], us-central1-f
internalIpOnly This property is required. Boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata This property is required. Map<String>
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
networkUri This property is required. String
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default, projects/[project_id]/regions/global/default, default
nodeGroupAffinity This property is required. Property Map
Optional. Node Group Affinity for sole-tenant clusters.
privateIpv6GoogleAccess This property is required. String
Optional. The type of IPv6 access for a cluster.
reservationAffinity This property is required. Property Map
Optional. Reservation Affinity for consuming Zonal reservation.
serviceAccount This property is required. String
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
serviceAccountScopes This property is required. List<String>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly, https://www.googleapis.com/auth/devstorage.read_write, https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery, https://www.googleapis.com/auth/bigtable.admin.table, https://www.googleapis.com/auth/bigtable.data, https://www.googleapis.com/auth/devstorage.full_control
shieldedInstanceConfig This property is required. Property Map
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetworkUri This property is required. String
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0, projects/[project_id]/regions/us-east1/subnetworks/sub0, sub0
tags This property is required. List<String>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zoneUri This property is required. String
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone], projects/[project_id]/zones/[zone], us-central1-f
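
The direct (promise) form can be convenient when several of these networking fields are needed at once. A sketch under the same placeholder identifiers, again assuming the template defines a managed cluster:

import * as pulumi from "@pulumi/pulumi";
import * as google_native from "@pulumi/google-native";

// Resolve the template once and pull out the Compute Engine cluster settings.
async function lookupGceConfig() {
    const tpl = await google_native.dataproc.v1beta2.getWorkflowTemplate({
        project: "my-project",              // hypothetical
        location: "us-central1",            // hypothetical
        workflowTemplateId: "my-template",  // hypothetical
    });
    const gce = tpl.placement.managedCluster?.config.gceClusterConfig;
    return {
        network: gce?.networkUri,
        subnetwork: gce?.subnetworkUri,
        zone: gce?.zoneUri,
        serviceAccount: gce?.serviceAccount,
        internalIpOnly: gce?.internalIpOnly,
    };
}

export const gceSettings = pulumi.output(lookupGceConfig());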

GkeClusterConfigResponse

NamespacedGkeDeploymentTarget This property is required. NamespacedGkeDeploymentTargetResponse
Optional. A target for the deployment.
namespacedGkeDeploymentTarget This property is required. NamespacedGkeDeploymentTargetResponse
Optional. A target for the deployment.
namespacedGkeDeploymentTarget This property is required. NamespacedGkeDeploymentTargetResponse
Optional. A target for the deployment.
namespaced_gke_deployment_target This property is required. NamespacedGkeDeploymentTargetResponse
Optional. A target for the deployment.
namespacedGkeDeploymentTarget This property is required. Property Map
Optional. A target for the deployment.
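
This block is only meaningful for Dataproc-on-GKE templates; for ordinary Compute Engine clusters the target comes back empty. A hedged sketch of reading it, with the same placeholder identifiers:

import * as google_native from "@pulumi/google-native";

// Placeholder project, region, and template id.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// The namespaced target names the GKE cluster and Kubernetes namespace the workload deploys into.
export const gkeTarget = tpl.apply(
    t => t.placement.managedCluster?.config.gkeClusterConfig.namespacedGkeDeploymentTarget);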

HadoopJobResponse

ArchiveUris This property is required. List<string>
Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
Args This property is required. List<string>
Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
FileUris This property is required. List<string>
Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
JarFileUris This property is required. List<string>
Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
LoggingConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
Optional. The runtime log config for job execution.
MainClass This property is required. string
The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
MainJarFileUri This property is required. string
The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
Properties This property is required. Dictionary<string, string>
Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
ArchiveUris This property is required. []string
Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
Args This property is required. []string
Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
FileUris This property is required. []string
Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
JarFileUris This property is required. []string
Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
LoggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
MainClass This property is required. string
The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
MainJarFileUri This property is required. string
The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
Properties This property is required. map[string]string
Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
archiveUris This property is required. List<String>
Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
args This property is required. List<String>
Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
fileUris This property is required. List<String>
Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
jarFileUris This property is required. List<String>
Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
loggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
mainClass This property is required. String
The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
mainJarFileUri This property is required. String
The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
properties This property is required. Map<String,String>
Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
archiveUris This property is required. string[]
Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
args This property is required. string[]
Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
fileUris This property is required. string[]
Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
jarFileUris This property is required. string[]
Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
loggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
mainClass This property is required. string
The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
mainJarFileUri This property is required. string
The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
properties This property is required. {[key: string]: string}
Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
archive_uris This property is required. Sequence[str]
Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
args This property is required. Sequence[str]
Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
file_uris This property is required. Sequence[str]
Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
jar_file_uris This property is required. Sequence[str]
Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
logging_config This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
main_class This property is required. str
The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
main_jar_file_uri This property is required. str
The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
properties This property is required. Mapping[str, str]
Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
archiveUris This property is required. List<String>
Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
args This property is required. List<String>
Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
fileUris This property is required. List<String>
Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
jarFileUris This property is required. List<String>
Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
loggingConfig This property is required. Property Map
Optional. The runtime log config for job execution.
mainClass This property is required. String
The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
mainJarFileUri This property is required. String
The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
properties This property is required. Map<String>
Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
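
These fields appear under each ordered job in the template's jobs list. The sketch below, using the same placeholder identifiers, collects the main jar (or main class) and arguments for the steps that carry a Hadoop payload; the field names are assumptions drawn from the v1beta2 OrderedJob shape.

import * as google_native from "@pulumi/google-native";

// Placeholder project, region, and template id.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// Keep only the steps whose payload is a Hadoop job and summarize how each is launched.
export const hadoopSteps = tpl.jobs.apply(jobs =>
    jobs
        .filter(j => j.hadoopJob)
        .map(j => ({
            step: j.stepId,
            main: j.hadoopJob.mainJarFileUri || j.hadoopJob.mainClass,
            args: j.hadoopJob.args,
        })));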

HiveJobResponse

ContinueOnFailure This property is required. bool
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
JarFileUris This property is required. List<string>
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
Properties This property is required. Dictionary<string, string>
Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
QueryFileUri This property is required. string
The HCFS URI of the script that contains Hive queries.
QueryList This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryListResponse
A list of queries.
ScriptVariables This property is required. Dictionary<string, string>
Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
ContinueOnFailure This property is required. bool
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
JarFileUris This property is required. []string
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
Properties This property is required. map[string]string
Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
QueryFileUri This property is required. string
The HCFS URI of the script that contains Hive queries.
QueryList This property is required. QueryListResponse
A list of queries.
ScriptVariables This property is required. map[string]string
Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
continueOnFailure This property is required. Boolean
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
jarFileUris This property is required. List<String>
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
properties This property is required. Map<String,String>
Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
queryFileUri This property is required. String
The HCFS URI of the script that contains Hive queries.
queryList This property is required. QueryListResponse
A list of queries.
scriptVariables This property is required. Map<String,String>
Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
continueOnFailure This property is required. boolean
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
jarFileUris This property is required. string[]
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
properties This property is required. {[key: string]: string}
Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
queryFileUri This property is required. string
The HCFS URI of the script that contains Hive queries.
queryList This property is required. QueryListResponse
A list of queries.
scriptVariables This property is required. {[key: string]: string}
Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
continue_on_failure This property is required. bool
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
jar_file_uris This property is required. Sequence[str]
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
properties This property is required. Mapping[str, str]
Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
query_file_uri This property is required. str
The HCFS URI of the script that contains Hive queries.
query_list This property is required. QueryListResponse
A list of queries.
script_variables This property is required. Mapping[str, str]
Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
continueOnFailure This property is required. Boolean
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
jarFileUris This property is required. List<String>
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
properties This property is required. Map<String>
Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
queryFileUri This property is required. String
The HCFS URI of the script that contains Hive queries.
queryList This property is required. Property Map
A list of queries.
scriptVariables This property is required. Map<String>
Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
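
A Hive step references either a script in HCFS (query_file_uri) or an inline query_list. As an illustrative sketch with the same placeholder identifiers, the snippet below reports whichever source each Hive step uses, together with its script variables.

import * as google_native from "@pulumi/google-native";

// Placeholder project, region, and template id.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// For each Hive step, prefer the script URI and fall back to the inline queries.
export const hiveSteps = tpl.jobs.apply(jobs =>
    jobs
        .filter(j => j.hiveJob)
        .map(j => ({
            step: j.stepId,
            source: j.hiveJob.queryFileUri || j.hiveJob.queryList?.queries,
            vars: j.hiveJob.scriptVariables,
        })));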

InstanceGroupConfigResponse

Accelerators This property is required. List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.AcceleratorConfigResponse>
Optional. The Compute Engine accelerator configuration for these instances.
DiskConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.DiskConfigResponse
Optional. Disk option config settings.
ImageUri This property is required. string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
InstanceNames This property is required. List<string>
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
InstanceReferences This property is required. List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceReferenceResponse>
List of references to Compute Engine instances.
IsPreemptible This property is required. bool
Specifies that this instance group contains preemptible instances.
MachineTypeUri This property is required. string
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2, projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
ManagedGroupConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ManagedGroupConfigResponse
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
MinCpuPlatform This property is required. string
Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
NumInstances This property is required. int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
Preemptibility This property is required. string
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
Accelerators This property is required. []AcceleratorConfigResponse
Optional. The Compute Engine accelerator configuration for these instances.
DiskConfig This property is required. DiskConfigResponse
Optional. Disk option config settings.
ImageUri This property is required. string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
InstanceNames This property is required. []string
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
InstanceReferences This property is required. []InstanceReferenceResponse
List of references to Compute Engine instances.
IsPreemptible This property is required. bool
Specifies that this instance group contains preemptible instances.
MachineTypeUri This property is required. string
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2, projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
ManagedGroupConfig This property is required. ManagedGroupConfigResponse
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
MinCpuPlatform This property is required. string
Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
NumInstances This property is required. int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
Preemptibility This property is required. string
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
accelerators This property is required. List<AcceleratorConfigResponse>
Optional. The Compute Engine accelerator configuration for these instances.
diskConfig This property is required. DiskConfigResponse
Optional. Disk option config settings.
imageUri This property is required. String
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
instanceNames This property is required. List<String>
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
instanceReferences This property is required. List<InstanceReferenceResponse>
List of references to Compute Engine instances.
isPreemptible This property is required. Boolean
Specifies that this instance group contains preemptible instances.
machineTypeUri This property is required. String
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2, projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
managedGroupConfig This property is required. ManagedGroupConfigResponse
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
minCpuPlatform This property is required. String
Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
numInstances This property is required. Integer
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility This property is required. String
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
accelerators This property is required. AcceleratorConfigResponse[]
Optional. The Compute Engine accelerator configuration for these instances.
diskConfig This property is required. DiskConfigResponse
Optional. Disk option config settings.
imageUri This property is required. string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
instanceNames This property is required. string[]
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
instanceReferences This property is required. InstanceReferenceResponse[]
List of references to Compute Engine instances.
isPreemptible This property is required. boolean
Specifies that this instance group contains preemptible instances.
machineTypeUri This property is required. string
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2, projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
managedGroupConfig This property is required. ManagedGroupConfigResponse
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
minCpuPlatform This property is required. string
Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
numInstances This property is required. number
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility This property is required. string
Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
accelerators This property is required. Sequence[AcceleratorConfigResponse]
Optional. The Compute Engine accelerator configuration for these instances.
disk_config This property is required. DiskConfigResponse
Optional. Disk option config settings.
image_uri This property is required. str
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
instance_names This property is required. Sequence[str]
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
instance_references This property is required. Sequence[InstanceReferenceResponse]
List of references to Compute Engine instances.
is_preemptible This property is required. bool
Specifies that this instance group contains preemptible instances.
machine_type_uri This property is required. str
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
managed_group_config This property is required. ManagedGroupConfigResponse
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
min_cpu_platform This property is required. str
Specifies the minimum CPU platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
num_instances This property is required. int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility This property is required. str
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
accelerators This property is required. List<Property Map>
Optional. The Compute Engine accelerator configuration for these instances.
diskConfig This property is required. Property Map
Optional. Disk option config settings.
imageUri This property is required. String
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
instanceNames This property is required. List<String>
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
instanceReferences This property is required. List<Property Map>
List of references to Compute Engine instances.
isPreemptible This property is required. Boolean
Specifies that this instance group contains preemptible instances.
machineTypeUri This property is required. String
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
managedGroupConfig This property is required. Property Map
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
minCpuPlatform This property is required. String
Specifies the minimum CPU platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
numInstances This property is required. Number
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility This property is required. String
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
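Because machineTypeUri may come back as a full URL, a partial URI, or a short name, it can be convenient to normalize it before comparing values across groups. A minimal TypeScript sketch; the helper name is illustrative, not part of the SDK:

// Normalize a machineTypeUri — full URL, partial URI, or short name — to the
// short machine type name, e.g. "n1-standard-2".
function machineTypeShortName(machineTypeUri: string): string {
    const parts = machineTypeUri.split("/");
    return parts[parts.length - 1];
}

// All three documented forms normalize to "n1-standard-2":
// machineTypeShortName("https://www.googleapis.com/compute/v1/projects/p/zones/us-east1-a/machineTypes/n1-standard-2")
// machineTypeShortName("projects/p/zones/us-east1-a/machineTypes/n1-standard-2")
// machineTypeShortName("n1-standard-2")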

InstanceReferenceResponse

InstanceId This property is required. string
The unique identifier of the Compute Engine instance.
InstanceName This property is required. string
The user-friendly name of the Compute Engine instance.
PublicKey This property is required. string
The public key used for sharing data with this instance.
InstanceId This property is required. string
The unique identifier of the Compute Engine instance.
InstanceName This property is required. string
The user-friendly name of the Compute Engine instance.
PublicKey This property is required. string
The public key used for sharing data with this instance.
instanceId This property is required. String
The unique identifier of the Compute Engine instance.
instanceName This property is required. String
The user-friendly name of the Compute Engine instance.
publicKey This property is required. String
The public key used for sharing data with this instance.
instanceId This property is required. string
The unique identifier of the Compute Engine instance.
instanceName This property is required. string
The user-friendly name of the Compute Engine instance.
publicKey This property is required. string
The public key used for sharing data with this instance.
instance_id This property is required. str
The unique identifier of the Compute Engine instance.
instance_name This property is required. str
The user-friendly name of the Compute Engine instance.
public_key This property is required. str
The public key used for sharing data with this instance.
instanceId This property is required. String
The unique identifier of the Compute Engine instance.
instanceName This property is required. String
The user-friendly name of the Compute Engine instance.
publicKey This property is required. String
The public key used for sharing data with this instance.
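When inspecting the instances a template run created, the instanceReferences list can be indexed by name for quick lookups. A small sketch using only the fields documented above; the interface is a local shape, not an SDK type:

// Index InstanceReferenceResponse values by instance name.
interface InstanceReference {
    instanceId: string;
    instanceName: string;
    publicKey: string;
}

function instanceIdsByName(refs: InstanceReference[]): Map<string, string> {
    return new Map(refs.map(r => [r.instanceName, r.instanceId]));
}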

JobSchedulingResponse

MaxFailuresPerHour This property is required. int
Optional. Maximum number of times per hour a driver may be restarted as a result of the driver terminating with a non-zero code before the job is reported failed. A job may be reported as thrashing if the driver exits with a non-zero code 4 times within a 10-minute window. Maximum value is 10.
MaxFailuresTotal This property is required. int
Optional. Maximum number of times in total a driver may be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. Maximum value is 240.
MaxFailuresPerHour This property is required. int
Optional. Maximum number of times per hour a driver may be restarted as a result of the driver terminating with a non-zero code before the job is reported failed. A job may be reported as thrashing if the driver exits with a non-zero code 4 times within a 10-minute window. Maximum value is 10.
MaxFailuresTotal This property is required. int
Optional. Maximum number of times in total a driver may be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. Maximum value is 240.
maxFailuresPerHour This property is required. Integer
Optional. Maximum number of times per hour a driver may be restarted as a result of the driver terminating with a non-zero code before the job is reported failed. A job may be reported as thrashing if the driver exits with a non-zero code 4 times within a 10-minute window. Maximum value is 10.
maxFailuresTotal This property is required. Integer
Optional. Maximum number of times in total a driver may be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. Maximum value is 240.
maxFailuresPerHour This property is required. number
Optional. Maximum number of times per hour a driver may be restarted as a result of the driver terminating with a non-zero code before the job is reported failed. A job may be reported as thrashing if the driver exits with a non-zero code 4 times within a 10-minute window. Maximum value is 10.
maxFailuresTotal This property is required. number
Optional. Maximum number of times in total a driver may be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. Maximum value is 240.
max_failures_per_hour This property is required. int
Optional. Maximum number of times per hour a driver may be restarted as a result of the driver terminating with a non-zero code before the job is reported failed. A job may be reported as thrashing if the driver exits with a non-zero code 4 times within a 10-minute window. Maximum value is 10.
max_failures_total This property is required. int
Optional. Maximum number of times in total a driver may be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. Maximum value is 240.
maxFailuresPerHour This property is required. Number
Optional. Maximum number of times per hour a driver may be restarted as a result of the driver terminating with a non-zero code before the job is reported failed. A job may be reported as thrashing if the driver exits with a non-zero code 4 times within a 10-minute window. Maximum value is 10.
maxFailuresTotal This property is required. Number
Optional. Maximum number of times in total a driver may be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. Maximum value is 240.
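The two limits cap driver restarts independently: at most 10 restarts in any hour and at most 240 restarts in total. A sketch that checks candidate values against those documented maxima; this is a local helper, not an SDK call:

// Check JobScheduling values against the documented limits.
interface JobScheduling {
    maxFailuresPerHour?: number;
    maxFailuresTotal?: number;
}

function schedulingProblems(s: JobScheduling): string[] {
    const problems: string[] = [];
    if (s.maxFailuresPerHour !== undefined && s.maxFailuresPerHour > 10) {
        problems.push(`maxFailuresPerHour ${s.maxFailuresPerHour} exceeds the maximum of 10`);
    }
    if (s.maxFailuresTotal !== undefined && s.maxFailuresTotal > 240) {
        problems.push(`maxFailuresTotal ${s.maxFailuresTotal} exceeds the maximum of 240`);
    }
    return problems;
}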

KerberosConfigResponse

CrossRealmTrustAdminServer This property is required. string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustKdc This property is required. string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustRealm This property is required. string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
CrossRealmTrustSharedPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
EnableKerberos This property is required. bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
KdcDbKeyUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
KeyPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
KeystorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
KeystoreUri This property is required. string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
KmsKeyUri This property is required. string
Optional. The URI of the KMS key used to encrypt various sensitive files.
Realm This property is required. string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
RootPrincipalPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
TgtLifetimeHours This property is required. int
Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
TruststorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
TruststoreUri This property is required. string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
CrossRealmTrustAdminServer This property is required. string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustKdc This property is required. string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustRealm This property is required. string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
CrossRealmTrustSharedPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
EnableKerberos This property is required. bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
KdcDbKeyUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
KeyPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
KeystorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
KeystoreUri This property is required. string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
KmsKeyUri This property is required. string
Optional. The URI of the KMS key used to encrypt various sensitive files.
Realm This property is required. string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
RootPrincipalPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
TgtLifetimeHours This property is required. int
Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
TruststorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
TruststoreUri This property is required. string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
crossRealmTrustAdminServer This property is required. String
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustKdc This property is required. String
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustRealm This property is required. String
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
crossRealmTrustSharedPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enableKerberos This property is required. Boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdcDbKeyUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
keyPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystorePasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystoreUri This property is required. String
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kmsKeyUri This property is required. String
Optional. The URI of the KMS key used to encrypt various sensitive files.
realm This property is required. String
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
rootPrincipalPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgtLifetimeHours This property is required. Integer
Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
truststorePasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststoreUri This property is required. String
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
crossRealmTrustAdminServer This property is required. string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustKdc This property is required. string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustRealm This property is required. string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
crossRealmTrustSharedPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enableKerberos This property is required. boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdcDbKeyUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
keyPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystoreUri This property is required. string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kmsKeyUri This property is required. string
Optional. The URI of the KMS key used to encrypt various sensitive files.
realm This property is required. string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
rootPrincipalPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgtLifetimeHours This property is required. number
Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
truststorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststoreUri This property is required. string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
cross_realm_trust_admin_server This property is required. str
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
cross_realm_trust_kdc This property is required. str
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
cross_realm_trust_realm This property is required. str
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
cross_realm_trust_shared_password_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enable_kerberos This property is required. bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdc_db_key_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
key_password_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystore_password_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystore_uri This property is required. str
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kms_key_uri This property is required. str
Optional. The URI of the KMS key used to encrypt various sensitive files.
realm This property is required. str
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
root_principal_password_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgt_lifetime_hours This property is required. int
Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
truststore_password_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststore_uri This property is required. str
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
crossRealmTrustAdminServer This property is required. String
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustKdc This property is required. String
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustRealm This property is required. String
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
crossRealmTrustSharedPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enableKerberos This property is required. Boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdcDbKeyUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
keyPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystorePasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystoreUri This property is required. String
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kmsKeyUri This property is required. String
Optional. The URI of the KMS key used to encrypt various sensitive files.
realm This property is required. String
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
rootPrincipalPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgtLifetimeHours This property is required. Number
Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
truststorePasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststoreUri This property is required. String
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
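Most Kerberos settings point at KMS-encrypted files in Cloud Storage. When auditing a template, it can help to gather those URIs in one place; the sketch below uses only field names documented above and is a local helper, not an SDK call:

// Collect the Cloud Storage URIs of KMS-encrypted secrets referenced by a
// Kerberos config, returning an empty list when Kerberos is not enabled.
interface KerberosSecrets {
    enableKerberos?: boolean;
    rootPrincipalPasswordUri?: string;
    kdcDbKeyUri?: string;
    keyPasswordUri?: string;
    keystorePasswordUri?: string;
    truststorePasswordUri?: string;
    crossRealmTrustSharedPasswordUri?: string;
}

function kerberosSecretUris(cfg: KerberosSecrets): string[] {
    if (!cfg.enableKerberos) {
        return [];
    }
    return [
        cfg.rootPrincipalPasswordUri,
        cfg.kdcDbKeyUri,
        cfg.keyPasswordUri,
        cfg.keystorePasswordUri,
        cfg.truststorePasswordUri,
        cfg.crossRealmTrustSharedPasswordUri,
    ].filter((uri): uri is string => uri !== undefined && uri !== "");
}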

LifecycleConfigResponse

AutoDeleteTime This property is required. string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
AutoDeleteTtl This property is required. string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
IdleDeleteTtl This property is required. string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
IdleStartTime This property is required. string
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
AutoDeleteTime This property is required. string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
AutoDeleteTtl This property is required. string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
IdleDeleteTtl This property is required. string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
IdleStartTime This property is required. string
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTime This property is required. String
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTtl This property is required. String
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleDeleteTtl This property is required. String
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleStartTime This property is required. String
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTime This property is required. string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTtl This property is required. string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleDeleteTtl This property is required. string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleStartTime This property is required. string
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
auto_delete_time This property is required. str
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
auto_delete_ttl This property is required. str
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idle_delete_ttl This property is required. str
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idle_start_time This property is required. str
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTime This property is required. String
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTtl This property is required. String
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleDeleteTtl This property is required. String
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleStartTime This property is required. String
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
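The *Ttl fields use the proto3 JSON form of Duration (a decimal number of seconds suffixed with "s", for example "1800s"), while the *Time fields use RFC 3339 timestamps. A small TypeScript sketch for converting them to native values; the helper is illustrative, not part of the SDK:

// Convert a proto3 JSON Duration such as "1800s" or "0.5s" into seconds.
function durationToSeconds(duration: string): number {
    if (!duration.endsWith("s")) {
        throw new Error(`unexpected Duration format: ${duration}`);
    }
    return Number(duration.slice(0, -1));
}

// durationToSeconds("1800s") === 1800, a 30-minute idleDeleteTtl.
// Timestamps parse directly: new Date("2023-11-29T12:00:00Z").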

LoggingConfigResponse

DriverLogLevels This property is required. Dictionary<string, string>
The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
DriverLogLevels This property is required. map[string]string
The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
driverLogLevels This property is required. Map<String,String>
The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
driverLogLevels This property is required. {[key: string]: string}
The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
driver_log_levels This property is required. Mapping[str, str]
The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
driverLogLevels This property is required. Map<String>
The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
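The map keys are package names (or "root" for the root logger) and the values are log level names, matching the examples above. A literal in that shape, purely for illustration:

// A driver log level map: package name (or "root") to level.
const driverLogLevels: Record<string, string> = {
    "root": "INFO",
    "org.apache": "DEBUG",
    "com.google": "FATAL",
};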

ManagedClusterResponse

ClusterName This property is required. string
The cluster name prefix. A unique cluster name will be formed by appending a random suffix. The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-), must begin with a letter, cannot begin or end with a hyphen, and must consist of between 2 and 35 characters.
Config This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ClusterConfigResponse
The cluster configuration.
Labels This property is required. Dictionary<string, string>
Optional. The labels to associate with this cluster. Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given cluster.
ClusterName This property is required. string
The cluster name prefix. A unique cluster name will be formed by appending a random suffix. The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-), must begin with a letter, cannot begin or end with a hyphen, and must consist of between 2 and 35 characters.
Config This property is required. ClusterConfigResponse
The cluster configuration.
Labels This property is required. map[string]string
Optional. The labels to associate with this cluster. Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given cluster.
clusterName This property is required. String
The cluster name prefix. A unique cluster name will be formed by appending a random suffix. The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-), must begin with a letter, cannot begin or end with a hyphen, and must consist of between 2 and 35 characters.
config This property is required. ClusterConfigResponse
The cluster configuration.
labels This property is required. Map<String,String>
Optional. The labels to associate with this cluster. Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given cluster.
clusterName This property is required. string
The cluster name prefix. A unique cluster name will be formed by appending a random suffix. The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-), must begin with a letter, cannot begin or end with a hyphen, and must consist of between 2 and 35 characters.
config This property is required. ClusterConfigResponse
The cluster configuration.
labels This property is required. {[key: string]: string}
Optional. The labels to associate with this cluster. Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given cluster.
cluster_name This property is required. str
The cluster name prefix. A unique cluster name will be formed by appending a random suffix. The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-), must begin with a letter, cannot begin or end with a hyphen, and must consist of between 2 and 35 characters.
config This property is required. ClusterConfigResponse
The cluster configuration.
labels This property is required. Mapping[str, str]
Optional. The labels to associate with this cluster. Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given cluster.
clusterName This property is required. String
The cluster name prefix. A unique cluster name will be formed by appending a random suffix. The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-), must begin with a letter, cannot begin or end with a hyphen, and must consist of between 2 and 35 characters.
config This property is required. Property Map
The cluster configuration.
labels This property is required. Map<String>
Optional. The labels to associate with this cluster. Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given cluster.
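To inspect a template's managed cluster from a Pulumi program, the result of getWorkflowTemplate can be drilled into. A minimal TypeScript sketch, assuming the result exposes the template's placement.managedCluster field (as in the Dataproc WorkflowTemplate schema); the region and template ID are placeholders:

import * as google_native from "@pulumi/google-native";

// Fetch the template and export a couple of managed-cluster fields.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    location: "us-central1",
    workflowTemplateId: "my-template",
});

export const clusterNamePrefix = tpl.apply(t => t.placement?.managedCluster?.clusterName);
export const clusterLabels = tpl.apply(t => t.placement?.managedCluster?.labels);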

ManagedGroupConfigResponse

InstanceGroupManagerName This property is required. string
The name of the Instance Group Manager for this group.
InstanceTemplateName This property is required. string
The name of the Instance Template used for the Managed Instance Group.
InstanceGroupManagerName This property is required. string
The name of the Instance Group Manager for this group.
InstanceTemplateName This property is required. string
The name of the Instance Template used for the Managed Instance Group.
instanceGroupManagerName This property is required. String
The name of the Instance Group Manager for this group.
instanceTemplateName This property is required. String
The name of the Instance Template used for the Managed Instance Group.
instanceGroupManagerName This property is required. string
The name of the Instance Group Manager for this group.
instanceTemplateName This property is required. string
The name of the Instance Template used for the Managed Instance Group.
instance_group_manager_name This property is required. str
The name of the Instance Group Manager for this group.
instance_template_name This property is required. str
The name of the Instance Template used for the Managed Instance Group.
instanceGroupManagerName This property is required. String
The name of the Instance Group Manager for this group.
instanceTemplateName This property is required. String
The name of the Instance Template used for the Managed Instance Group.

MetastoreConfigResponse

DataprocMetastoreService This property is required. string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
DataprocMetastoreService This property is required. string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataprocMetastoreService This property is required. String
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataprocMetastoreService This property is required. string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataproc_metastore_service This property is required. str
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataprocMetastoreService This property is required. String
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

NamespacedGkeDeploymentTargetResponse

ClusterNamespace This property is required. string
Optional. A namespace within the GKE cluster to deploy into.
TargetGkeCluster This property is required. string
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
ClusterNamespace This property is required. string
Optional. A namespace within the GKE cluster to deploy into.
TargetGkeCluster This property is required. string
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
clusterNamespace This property is required. String
Optional. A namespace within the GKE cluster to deploy into.
targetGkeCluster This property is required. String
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
clusterNamespace This property is required. string
Optional. A namespace within the GKE cluster to deploy into.
targetGkeCluster This property is required. string
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
cluster_namespace This property is required. str
Optional. A namespace within the GKE cluster to deploy into.
target_gke_cluster This property is required. str
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
clusterNamespace This property is required. String
Optional. A namespace within the GKE cluster to deploy into.
targetGkeCluster This property is required. String
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

NodeGroupAffinityResponse

NodeGroupUri This property is required. string
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
NodeGroupUri This property is required. string
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
nodeGroupUri This property is required. String
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
nodeGroupUri This property is required. string
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
node_group_uri This property is required. str
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
nodeGroupUri This property is required. String
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1

NodeInitializationActionResponse

ExecutableFile This property is required. string
Cloud Storage URI of executable file.
ExecutionTimeout This property is required. string
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
ExecutableFile This property is required. string
Cloud Storage URI of executable file.
ExecutionTimeout This property is required. string
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executableFile This property is required. String
Cloud Storage URI of executable file.
executionTimeout This property is required. String
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executableFile This property is required. string
Cloud Storage URI of executable file.
executionTimeout This property is required. string
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executable_file This property is required. str
Cloud Storage URI of executable file.
execution_timeout This property is required. str
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executableFile This property is required. String
Cloud Storage URI of executable file.
executionTimeout This property is required. String
Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
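executionTimeout also uses the proto3 JSON Duration form, so a ten-minute limit is written as "600s". A sketch of an entry in the documented shape; the bucket path is a placeholder:

// One initialization action: a Cloud Storage script plus an optional timeout.
const initAction = {
    executableFile: "gs://my-bucket/scripts/bootstrap.sh",
    executionTimeout: "600s",  // 10 minutes, the documented default
};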

OrderedJobResponse

HadoopJob This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.HadoopJobResponse
Optional. Job is a Hadoop job.
HiveJob This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.HiveJobResponse
Optional. Job is a Hive job.
Labels This property is required. Dictionary<string, string>
Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
PigJob This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.PigJobResponse
Optional. Job is a Pig job.
PrerequisiteStepIds This property is required. List<string>
Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
PrestoJob This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.PrestoJobResponse
Optional. Job is a Presto job.
PysparkJob This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.PySparkJobResponse
Optional. Job is a PySpark job.
Scheduling This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.JobSchedulingResponse
Optional. Job scheduling configuration.
SparkJob This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SparkJobResponse
Optional. Job is a Spark job.
SparkRJob This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SparkRJobResponse
Optional. Job is a SparkR job.
SparkSqlJob This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SparkSqlJobResponse
Optional. Job is a SparkSql job.
StepId This property is required. string
The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
HadoopJob This property is required. HadoopJobResponse
Optional. Job is a Hadoop job.
HiveJob This property is required. HiveJobResponse
Optional. Job is a Hive job.
Labels This property is required. map[string]string
Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
PigJob This property is required. PigJobResponse
Optional. Job is a Pig job.
PrerequisiteStepIds This property is required. []string
Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
PrestoJob This property is required. PrestoJobResponse
Optional. Job is a Presto job.
PysparkJob This property is required. PySparkJobResponse
Optional. Job is a PySpark job.
Scheduling This property is required. JobSchedulingResponse
Optional. Job scheduling configuration.
SparkJob This property is required. SparkJobResponse
Optional. Job is a Spark job.
SparkRJob This property is required. SparkRJobResponse
Optional. Job is a SparkR job.
SparkSqlJob This property is required. SparkSqlJobResponse
Optional. Job is a SparkSql job.
StepId This property is required. string
The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
hadoopJob This property is required. HadoopJobResponse
Optional. Job is a Hadoop job.
hiveJob This property is required. HiveJobResponse
Optional. Job is a Hive job.
labels This property is required. Map<String,String>
Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
pigJob This property is required. PigJobResponse
Optional. Job is a Pig job.
prerequisiteStepIds This property is required. List<String>
Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
prestoJob This property is required. PrestoJobResponse
Optional. Job is a Presto job.
pysparkJob This property is required. PySparkJobResponse
Optional. Job is a PySpark job.
scheduling This property is required. JobSchedulingResponse
Optional. Job scheduling configuration.
sparkJob This property is required. SparkJobResponse
Optional. Job is a Spark job.
sparkRJob This property is required. SparkRJobResponse
Optional. Job is a SparkR job.
sparkSqlJob This property is required. SparkSqlJobResponse
Optional. Job is a SparkSql job.
stepId This property is required. String
The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in prerequisiteStepIds fields from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
hadoopJob This property is required. HadoopJobResponse
Optional. Job is a Hadoop job.
hiveJob This property is required. HiveJobResponse
Optional. Job is a Hive job.
labels This property is required. {[key: string]: string}
Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
pigJob This property is required. PigJobResponse
Optional. Job is a Pig job.
prerequisiteStepIds This property is required. string[]
Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
prestoJob This property is required. PrestoJobResponse
Optional. Job is a Presto job.
pysparkJob This property is required. PySparkJobResponse
Optional. Job is a PySpark job.
scheduling This property is required. JobSchedulingResponse
Optional. Job scheduling configuration.
sparkJob This property is required. SparkJobResponse
Optional. Job is a Spark job.
sparkRJob This property is required. SparkRJobResponse
Optional. Job is a SparkR job.
sparkSqlJob This property is required. SparkSqlJobResponse
Optional. Job is a SparkSql job.
stepId This property is required. string
The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in prerequisiteStepIds fields from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
hadoop_job This property is required. HadoopJobResponse
Optional. Job is a Hadoop job.
hive_job This property is required. HiveJobResponse
Optional. Job is a Hive job.
labels This property is required. Mapping[str, str]
Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
pig_job This property is required. PigJobResponse
Optional. Job is a Pig job.
prerequisite_step_ids This property is required. Sequence[str]
Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
presto_job This property is required. PrestoJobResponse
Optional. Job is a Presto job.
pyspark_job This property is required. PySparkJobResponse
Optional. Job is a PySpark job.
scheduling This property is required. JobSchedulingResponse
Optional. Job scheduling configuration.
spark_job This property is required. SparkJobResponse
Optional. Job is a Spark job.
spark_r_job This property is required. SparkRJobResponse
Optional. Job is a SparkR job.
spark_sql_job This property is required. SparkSqlJobResponse
Optional. Job is a SparkSql job.
step_id This property is required. str
The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in prerequisiteStepIds fields from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
hadoopJob This property is required. Property Map
Optional. Job is a Hadoop job.
hiveJob This property is required. Property Map
Optional. Job is a Hive job.
labels This property is required. Map<String>
Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
pigJob This property is required. Property Map
Optional. Job is a Pig job.
prerequisiteStepIds This property is required. List<String>
Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
prestoJob This property is required. Property Map
Optional. Job is a Presto job.
pysparkJob This property is required. Property Map
Optional. Job is a PySpark job.
scheduling This property is required. Property Map
Optional. Job scheduling configuration.
sparkJob This property is required. Property Map
Optional. Job is a Spark job.
sparkRJob This property is required. Property Map
Optional. Job is a SparkR job.
sparkSqlJob This property is required. Property Map
Optional. Job is a SparkSql job.
stepId This property is required. String
The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in prerequisiteStepIds fields from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
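Each ordered job carries exactly one of the job-type fields above alongside its stepId and prerequisiteStepIds. The following is a minimal TypeScript sketch of reading these fields from the lookup result; it assumes the @pulumi/google-native SDK and that the result exposes the template's steps as jobs, and the project, location, and template id are placeholders.

import * as google from "@pulumi/google-native";

export async function listSteps(): Promise<void> {
    // Placeholder identifiers; substitute your own project, region, and template id.
    const template = await google.dataproc.v1beta2.getWorkflowTemplate({
        project: "my-project",
        location: "us-central1",
        workflowTemplateId: "my-template",
    });
    for (const job of template.jobs) {
        // Only the field matching the step's actual job type carries the payload;
        // the other job-type fields come back empty in the response.
        console.log(job.stepId, "depends on:", job.prerequisiteStepIds);
    }
}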

ParameterValidationResponse

Regex This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.RegexValidationResponse
Validation based on regular expressions.
Values This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ValueValidationResponse
Validation based on a list of allowed values.
Regex This property is required. RegexValidationResponse
Validation based on regular expressions.
Values This property is required. ValueValidationResponse
Validation based on a list of allowed values.
regex This property is required. RegexValidationResponse
Validation based on regular expressions.
values This property is required. ValueValidationResponse
Validation based on a list of allowed values.
regex This property is required. RegexValidationResponse
Validation based on regular expressions.
values This property is required. ValueValidationResponse
Validation based on a list of allowed values.
regex This property is required. RegexValidationResponse
Validation based on regular expressions.
values This property is required. ValueValidationResponse
Validation based on a list of allowed values.
regex This property is required. Property Map
Validation based on regular expressions.
values This property is required. Property Map
Validation based on a list of allowed values.

PigJobResponse

ContinueOnFailure This property is required. bool
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
JarFileUris This property is required. List<string>
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
LoggingConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
Optional. The runtime log config for job execution.
Properties This property is required. Dictionary<string, string>
Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
QueryFileUri This property is required. string
The HCFS URI of the script that contains the Pig queries.
QueryList This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryListResponse
A list of queries.
ScriptVariables This property is required. Dictionary<string, string>
Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
ContinueOnFailure This property is required. bool
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
JarFileUris This property is required. []string
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
LoggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
Properties This property is required. map[string]string
Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
QueryFileUri This property is required. string
The HCFS URI of the script that contains the Pig queries.
QueryList This property is required. QueryListResponse
A list of queries.
ScriptVariables This property is required. map[string]string
Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
continueOnFailure This property is required. Boolean
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
jarFileUris This property is required. List<String>
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
loggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
properties This property is required. Map<String,String>
Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
queryFileUri This property is required. String
The HCFS URI of the script that contains the Pig queries.
queryList This property is required. QueryListResponse
A list of queries.
scriptVariables This property is required. Map<String,String>
Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
continueOnFailure This property is required. boolean
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
jarFileUris This property is required. string[]
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
loggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
properties This property is required. {[key: string]: string}
Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
queryFileUri This property is required. string
The HCFS URI of the script that contains the Pig queries.
queryList This property is required. QueryListResponse
A list of queries.
scriptVariables This property is required. {[key: string]: string}
Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
continue_on_failure This property is required. bool
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
jar_file_uris This property is required. Sequence[str]
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
logging_config This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
properties This property is required. Mapping[str, str]
Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
query_file_uri This property is required. str
The HCFS URI of the script that contains the Pig queries.
query_list This property is required. QueryListResponse
A list of queries.
script_variables This property is required. Mapping[str, str]
Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
continueOnFailure This property is required. Boolean
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
jarFileUris This property is required. List<String>
Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
loggingConfig This property is required. Property Map
Optional. The runtime log config for job execution.
properties This property is required. Map<String>
Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
queryFileUri This property is required. String
The HCFS URI of the script that contains the Pig queries.
queryList This property is required. Property Map
A list of queries.
scriptVariables This property is required. Map<String>
Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
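For a Pig step, the queries come either from queryFileUri or from an inline queryList, with scriptVariables supplying name=value substitutions. Below is a hedged sketch using the output form of the lookup; the placeholder identifiers and the jobs field on the result are assumptions, not part of this reference.

import * as google from "@pulumi/google-native";

const tmpl = google.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",
    location: "us-central1",
    workflowTemplateId: "etl-template",
});

// Collect, per Pig step, where its queries come from and whether later
// queries keep running after a failure.
export const pigSteps = tmpl.jobs.apply(jobs =>
    jobs
        .filter(j => j.pigJob)
        .map(j => ({
            step: j.stepId,
            source: j.pigJob.queryFileUri || j.pigJob.queryList?.queries,
            continueOnFailure: j.pigJob.continueOnFailure,
        })));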

PrestoJobResponse

ClientTags This property is required. List<string>
Optional. Presto client tags to attach to this query
ContinueOnFailure This property is required. bool
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
LoggingConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
Optional. The runtime log config for job execution.
OutputFormat This property is required. string
Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
Properties This property is required. Dictionary<string, string>
Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
QueryFileUri This property is required. string
The HCFS URI of the script that contains SQL queries.
QueryList This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryListResponse
A list of queries.
ClientTags This property is required. []string
Optional. Presto client tags to attach to this query
ContinueOnFailure This property is required. bool
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
LoggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
OutputFormat This property is required. string
Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
Properties This property is required. map[string]string
Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
QueryFileUri This property is required. string
The HCFS URI of the script that contains SQL queries.
QueryList This property is required. QueryListResponse
A list of queries.
clientTags This property is required. List<String>
Optional. Presto client tags to attach to this query
continueOnFailure This property is required. Boolean
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
loggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
outputFormat This property is required. String
Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
properties This property is required. Map<String,String>
Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
queryFileUri This property is required. String
The HCFS URI of the script that contains SQL queries.
queryList This property is required. QueryListResponse
A list of queries.
clientTags This property is required. string[]
Optional. Presto client tags to attach to this query
continueOnFailure This property is required. boolean
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
loggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
outputFormat This property is required. string
Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
properties This property is required. {[key: string]: string}
Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
queryFileUri This property is required. string
The HCFS URI of the script that contains SQL queries.
queryList This property is required. QueryListResponse
A list of queries.
client_tags This property is required. Sequence[str]
Optional. Presto client tags to attach to this query
continue_on_failure This property is required. bool
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
logging_config This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
output_format This property is required. str
Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
properties This property is required. Mapping[str, str]
Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
query_file_uri This property is required. str
The HCFS URI of the script that contains SQL queries.
query_list This property is required. QueryListResponse
A list of queries.
clientTags This property is required. List<String>
Optional. Presto client tags to attach to this query
continueOnFailure This property is required. Boolean
Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
loggingConfig This property is required. Property Map
Optional. The runtime log config for job execution.
outputFormat This property is required. String
Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
properties This property is required. Map<String>
Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
queryFileUri This property is required. String
The HCFS URI of the script that contains SQL queries.
queryList This property is required. Property Map
A list of queries.

PySparkJobResponse

ArchiveUris This property is required. List<string>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
Args This property is required. List<string>
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
FileUris This property is required. List<string>
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
JarFileUris This property is required. List<string>
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
LoggingConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
Optional. The runtime log config for job execution.
MainPythonFileUri This property is required. string
The HCFS URI of the main Python file to use as the driver. Must be a .py file.
Properties This property is required. Dictionary<string, string>
Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
PythonFileUris This property is required. List<string>
Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
ArchiveUris This property is required. []string
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
Args This property is required. []string
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
FileUris This property is required. []string
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
JarFileUris This property is required. []string
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
LoggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
MainPythonFileUri This property is required. string
The HCFS URI of the main Python file to use as the driver. Must be a .py file.
Properties This property is required. map[string]string
Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
PythonFileUris This property is required. []string
Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
archiveUris This property is required. List<String>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. List<String>
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
fileUris This property is required. List<String>
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
jarFileUris This property is required. List<String>
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
loggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
mainPythonFileUri This property is required. String
The HCFS URI of the main Python file to use as the driver. Must be a .py file.
properties This property is required. Map<String,String>
Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
pythonFileUris This property is required. List<String>
Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
archiveUris This property is required. string[]
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. string[]
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
fileUris This property is required. string[]
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
jarFileUris This property is required. string[]
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
loggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
mainPythonFileUri This property is required. string
The HCFS URI of the main Python file to use as the driver. Must be a .py file.
properties This property is required. {[key: string]: string}
Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
pythonFileUris This property is required. string[]
Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
archive_uris This property is required. Sequence[str]
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. Sequence[str]
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
file_uris This property is required. Sequence[str]
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
jar_file_uris This property is required. Sequence[str]
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
logging_config This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
main_python_file_uri This property is required. str
The HCFS URI of the main Python file to use as the driver. Must be a .py file.
properties This property is required. Mapping[str, str]
Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
python_file_uris This property is required. Sequence[str]
Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
archiveUris This property is required. List<String>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. List<String>
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
fileUris This property is required. List<String>
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
jarFileUris This property is required. List<String>
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
loggingConfig This property is required. Property Map
Optional. The runtime log config for job execution.
mainPythonFileUri This property is required. String
The HCFS URI of the main Python file to use as the driver. Must be a .py file.
properties This property is required. Map<String>
Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
pythonFileUris This property is required. List<String>
Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
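The PySpark fields map closely onto a spark-submit invocation: mainPythonFileUri is the driver, pythonFileUris go to --py-files, and args are passed through. The helper below is illustrative only; its name and the PySparkJobLike interface are invented here, with field names mirroring the response shape documented above.

// Field names mirror the PySparkJobResponse shape documented above.
interface PySparkJobLike {
    mainPythonFileUri: string;
    pythonFileUris: string[];
    args: string[];
}

// Illustrative only: approximately the spark-submit line the step amounts to.
function describePySparkStep(job: PySparkJobLike): string {
    const pyFiles = job.pythonFileUris.length
        ? ` --py-files ${job.pythonFileUris.join(",")}`
        : "";
    return `spark-submit${pyFiles} ${job.mainPythonFileUri} ${job.args.join(" ")}`.trim();
}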

QueryListResponse

Queries This property is required. List<string>
The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
Queries This property is required. []string
The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
queries This property is required. List<String>
The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
queries This property is required. string[]
The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
queries This property is required. Sequence[str]
The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
queries This property is required. List<String>
The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
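As the example in the description shows, a single entry in queries may itself contain several semicolon-separated statements. A small helper (name invented here) that normalizes such a list into one statement per entry:

// flattenQueries(["query1", "query2", "query3;query4"])
//   => ["query1", "query2", "query3", "query4"]
function flattenQueries(queries: string[]): string[] {
    return queries
        .flatMap(q => q.split(";"))
        .map(q => q.trim())
        .filter(q => q.length > 0);
}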

RegexValidationResponse

Regexes This property is required. List<string>
RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
Regexes This property is required. []string
RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
regexes This property is required. List<String>
RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
regexes This property is required. string[]
RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
regexes This property is required. Sequence[str]
RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
regexes This property is required. List<String>
RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
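The match must cover the entire value, not a substring. A rough local approximation (JavaScript RegExp rather than RE2, so only simple patterns carry over; the function name is invented here) anchors the pattern explicitly:

// matchesEntirely("us-central1", "[a-z0-9-]+") => true
// matchesEntirely("us central1", "[a-z0-9-]+") => false
function matchesEntirely(value: string, pattern: string): boolean {
    return new RegExp(`^(?:${pattern})$`).test(value);
}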

ReservationAffinityResponse

ConsumeReservationType This property is required. string
Optional. Type of reservation to consume
Key This property is required. string
Optional. Corresponds to the label key of reservation resource.
Values This property is required. List<string>
Optional. Corresponds to the label values of reservation resource.
ConsumeReservationType This property is required. string
Optional. Type of reservation to consume
Key This property is required. string
Optional. Corresponds to the label key of reservation resource.
Values This property is required. []string
Optional. Corresponds to the label values of reservation resource.
consumeReservationType This property is required. String
Optional. Type of reservation to consume
key This property is required. String
Optional. Corresponds to the label key of reservation resource.
values This property is required. List<String>
Optional. Corresponds to the label values of reservation resource.
consumeReservationType This property is required. string
Optional. Type of reservation to consume
key This property is required. string
Optional. Corresponds to the label key of reservation resource.
values This property is required. string[]
Optional. Corresponds to the label values of reservation resource.
consume_reservation_type This property is required. str
Optional. Type of reservation to consume
key This property is required. str
Optional. Corresponds to the label key of reservation resource.
values This property is required. Sequence[str]
Optional. Corresponds to the label values of reservation resource.
consumeReservationType This property is required. String
Optional. Type of reservation to consume
key This property is required. String
Optional. Corresponds to the label key of reservation resource.
values This property is required. List<String>
Optional. Corresponds to the label values of reservation resource.

SecurityConfigResponse

KerberosConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.KerberosConfigResponse
Optional. Kerberos related configuration.
KerberosConfig This property is required. KerberosConfigResponse
Optional. Kerberos related configuration.
kerberosConfig This property is required. KerberosConfigResponse
Optional. Kerberos related configuration.
kerberosConfig This property is required. KerberosConfigResponse
Optional. Kerberos related configuration.
kerberos_config This property is required. KerberosConfigResponse
Optional. Kerberos related configuration.
kerberosConfig This property is required. Property Map
Optional. Kerberos related configuration.

ShieldedInstanceConfigResponse

EnableIntegrityMonitoring This property is required. bool
Optional. Defines whether instances have integrity monitoring enabled.
EnableSecureBoot This property is required. bool
Optional. Defines whether instances have Secure Boot enabled.
EnableVtpm This property is required. bool
Optional. Defines whether instances have the vTPM enabled.
EnableIntegrityMonitoring This property is required. bool
Optional. Defines whether instances have integrity monitoring enabled.
EnableSecureBoot This property is required. bool
Optional. Defines whether instances have Secure Boot enabled.
EnableVtpm This property is required. bool
Optional. Defines whether instances have the vTPM enabled.
enableIntegrityMonitoring This property is required. Boolean
Optional. Defines whether instances have integrity monitoring enabled.
enableSecureBoot This property is required. Boolean
Optional. Defines whether instances have Secure Boot enabled.
enableVtpm This property is required. Boolean
Optional. Defines whether instances have the vTPM enabled.
enableIntegrityMonitoring This property is required. boolean
Optional. Defines whether instances have integrity monitoring enabled.
enableSecureBoot This property is required. boolean
Optional. Defines whether instances have Secure Boot enabled.
enableVtpm This property is required. boolean
Optional. Defines whether instances have the vTPM enabled.
enable_integrity_monitoring This property is required. bool
Optional. Defines whether instances have integrity monitoring enabled.
enable_secure_boot This property is required. bool
Optional. Defines whether instances have Secure Boot enabled.
enable_vtpm This property is required. bool
Optional. Defines whether instances have the vTPM enabled.
enableIntegrityMonitoring This property is required. Boolean
Optional. Defines whether instances have integrity monitoring enabled.
enableSecureBoot This property is required. Boolean
Optional. Defines whether instances have Secure Boot enabled.
enableVtpm This property is required. Boolean
Optional. Defines whether instances have the vTPM enabled.

SoftwareConfigResponse

ImageVersion This property is required. string
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
OptionalComponents This property is required. List<string>
The set of optional components to activate on the cluster.
Properties This property is required. Dictionary<string, string>
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
ImageVersion This property is required. string
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
OptionalComponents This property is required. []string
The set of optional components to activate on the cluster.
Properties This property is required. map[string]string
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
imageVersion This property is required. String
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optionalComponents This property is required. List<String>
The set of optional components to activate on the cluster.
properties This property is required. Map<String,String>
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
imageVersion This property is required. string
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optionalComponents This property is required. string[]
The set of optional components to activate on the cluster.
properties This property is required. {[key: string]: string}
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
image_version This property is required. str
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optional_components This property is required. Sequence[str]
The set of optional components to activate on the cluster.
properties This property is required. Mapping[str, str]
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
imageVersion This property is required. String
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optionalComponents This property is required. List<String>
The set of optional components to activate on the cluster.
properties This property is required. Map<String>
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
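The prefix:property convention can be unpacked mechanically: everything before the first colon names the config-file group and the remainder is the property key. A small sketch (helper name invented here):

// groupByConfigFile({ "core:hadoop.tmp.dir": "/tmp", "spark:spark.executor.memory": "4g" })
//   => { core: { "hadoop.tmp.dir": "/tmp" }, spark: { "spark.executor.memory": "4g" } }
function groupByConfigFile(properties: Record<string, string>): Record<string, Record<string, string>> {
    const grouped: Record<string, Record<string, string>> = {};
    for (const [key, value] of Object.entries(properties)) {
        const [prefix, ...rest] = key.split(":");
        (grouped[prefix] ??= {})[rest.join(":")] = value;
    }
    return grouped;
}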

SparkJobResponse

ArchiveUris This property is required. List<string>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
Args This property is required. List<string>
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
FileUris This property is required. List<string>
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
JarFileUris This property is required. List<string>
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
LoggingConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
Optional. The runtime log config for job execution.
MainClass This property is required. string
The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
MainJarFileUri This property is required. string
The HCFS URI of the jar file that contains the main class.
Properties This property is required. Dictionary<string, string>
Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
ArchiveUris This property is required. []string
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
Args This property is required. []string
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
FileUris This property is required. []string
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
JarFileUris This property is required. []string
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
LoggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
MainClass This property is required. string
The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
MainJarFileUri This property is required. string
The HCFS URI of the jar file that contains the main class.
Properties This property is required. map[string]string
Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
archiveUris This property is required. List<String>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. List<String>
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
fileUris This property is required. List<String>
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
jarFileUris This property is required. List<String>
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
loggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
mainClass This property is required. String
The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
mainJarFileUri This property is required. String
The HCFS URI of the jar file that contains the main class.
properties This property is required. Map<String,String>
Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
archiveUris This property is required. string[]
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. string[]
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
fileUris This property is required. string[]
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
jarFileUris This property is required. string[]
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
loggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
mainClass This property is required. string
The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
mainJarFileUri This property is required. string
The HCFS URI of the jar file that contains the main class.
properties This property is required. {[key: string]: string}
Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
archive_uris This property is required. Sequence[str]
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. Sequence[str]
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
file_uris This property is required. Sequence[str]
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
jar_file_uris This property is required. Sequence[str]
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
logging_config This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
main_class This property is required. str
The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
main_jar_file_uri This property is required. str
The HCFS URI of the jar file that contains the main class.
properties This property is required. Mapping[str, str]
Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
archiveUris This property is required. List<String>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. List<String>
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
fileUris This property is required. List<String>
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
jarFileUris This property is required. List<String>
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
loggingConfig This property is required. Property Map
Optional. The runtime log config for job execution.
mainClass This property is required. String
The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
mainJarFileUri This property is required. String
The HCFS URI of the jar file that contains the main class.
properties This property is required. Map<String>
Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
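mainClass and mainJarFileUri are alternative ways of naming the driver entry point; a step is expected to set one or the other, with mainClass relying on the jar already being on the classpath or listed in jarFileUris. A small defensive sketch over a plain object shaped like the response (the interface and function names are invented here):

// Field names mirror the SparkJobResponse shape documented above.
interface SparkJobLike {
    mainClass?: string;
    mainJarFileUri?: string;
}

function sparkEntryPoint(job: SparkJobLike): string {
    if (job.mainClass && job.mainJarFileUri) {
        throw new Error("expected only one of mainClass or mainJarFileUri to be set");
    }
    return job.mainClass ?? job.mainJarFileUri ?? "(no entry point set)";
}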

SparkRJobResponse

ArchiveUris This property is required. List<string>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
Args This property is required. List<string>
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
FileUris This property is required. List<string>
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
LoggingConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
Optional. The runtime log config for job execution.
MainRFileUri This property is required. string
The HCFS URI of the main R file to use as the driver. Must be a .R file.
Properties This property is required. Dictionary<string, string>
Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
ArchiveUris This property is required. []string
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
Args This property is required. []string
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
FileUris This property is required. []string
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
LoggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
MainRFileUri This property is required. string
The HCFS URI of the main R file to use as the driver. Must be a .R file.
Properties This property is required. map[string]string
Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
archiveUris This property is required. List<String>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. List<String>
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
fileUris This property is required. List<String>
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
loggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
mainRFileUri This property is required. String
The HCFS URI of the main R file to use as the driver. Must be a .R file.
properties This property is required. Map<String,String>
Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
archiveUris This property is required. string[]
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. string[]
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
fileUris This property is required. string[]
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
loggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
mainRFileUri This property is required. string
The HCFS URI of the main R file to use as the driver. Must be a .R file.
properties This property is required. {[key: string]: string}
Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
archive_uris This property is required. Sequence[str]
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. Sequence[str]
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
file_uris This property is required. Sequence[str]
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
logging_config This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
main_r_file_uri This property is required. str
The HCFS URI of the main R file to use as the driver. Must be a .R file.
properties This property is required. Mapping[str, str]
Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
archiveUris This property is required. List<String>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
args This property is required. List<String>
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
fileUris This property is required. List<String>
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
loggingConfig This property is required. Property Map
Optional. The runtime log config for job execution.
mainRFileUri This property is required. String
The HCFS URI of the main R file to use as the driver. Must be a .R file.
properties This property is required. Map<String>
Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
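
A similar hedged sketch for SparkR steps, under the same illustrative location and template id: it collects each step's driver .R file as described in the SparkRJobResponse table above.

import * as google_native from "@pulumi/google-native";

// Illustrative values; substitute a real location and workflow template id.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// The driver .R file of each SparkR step defined in the template.
export const sparkRDrivers = tpl.jobs.apply(jobs =>
    jobs.map(j => j.sparkRJob?.mainRFileUri).filter(uri => !!uri));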

SparkSqlJobResponse

JarFileUris This property is required. List<string>
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
LoggingConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
Optional. The runtime log config for job execution.
Properties This property is required. Dictionary<string, string>
Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
QueryFileUri This property is required. string
The HCFS URI of the script that contains SQL queries.
QueryList This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryListResponse
A list of queries.
ScriptVariables This property is required. Dictionary<string, string>
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
JarFileUris This property is required. []string
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
LoggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
Properties This property is required. map[string]string
Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
QueryFileUri This property is required. string
The HCFS URI of the script that contains SQL queries.
QueryList This property is required. QueryListResponse
A list of queries.
ScriptVariables This property is required. map[string]string
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
jarFileUris This property is required. List<String>
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
loggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
properties This property is required. Map<String,String>
Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
queryFileUri This property is required. String
The HCFS URI of the script that contains SQL queries.
queryList This property is required. QueryListResponse
A list of queries.
scriptVariables This property is required. Map<String,String>
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
jarFileUris This property is required. string[]
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
loggingConfig This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
properties This property is required. {[key: string]: string}
Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
queryFileUri This property is required. string
The HCFS URI of the script that contains SQL queries.
queryList This property is required. QueryListResponse
A list of queries.
scriptVariables This property is required. {[key: string]: string}
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
jar_file_uris This property is required. Sequence[str]
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
logging_config This property is required. LoggingConfigResponse
Optional. The runtime log config for job execution.
properties This property is required. Mapping[str, str]
Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
query_file_uri This property is required. str
The HCFS URI of the script that contains SQL queries.
query_list This property is required. QueryListResponse
A list of queries.
script_variables This property is required. Mapping[str, str]
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
jarFileUris This property is required. List<String>
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
loggingConfig This property is required. Property Map
Optional. The runtime log config for job execution.
properties This property is required. Map<String>
Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
queryFileUri This property is required. String
The HCFS URI of the script that contains SQL queries.
queryList This property is required. Property Map
A list of queries.
scriptVariables This property is required. Map<String>
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
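
A hedged TypeScript sketch for Spark SQL steps, again with illustrative identifiers: it pairs each step with the script it runs and the variables substituted into it, following the SparkSqlJobResponse shape above.

import * as google_native from "@pulumi/google-native";

// Illustrative values; substitute a real location and workflow template id.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// For each Spark SQL step: the script it runs and the variables substituted into it.
export const sparkSqlSteps = tpl.jobs.apply(jobs =>
    jobs.filter(j => j.sparkSqlJob)
        .map(j => ({
            stepId: j.stepId,
            queryFileUri: j.sparkSqlJob?.queryFileUri,
            scriptVariables: j.sparkSqlJob?.scriptVariables,
        })));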

TemplateParameterResponse

Description This property is required. string
Optional. Brief description of the parameter. Must not exceed 1024 characters.
Fields This property is required. List<string>
Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax. Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args.
Name This property is required. string
Parameter name. The parameter name is used as the key and, paired with the parameter value, is passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
Validation This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ParameterValidationResponse
Optional. Validation rules to be applied to this parameter's value.
Description This property is required. string
Optional. Brief description of the parameter. Must not exceed 1024 characters.
Fields This property is required. []string
Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax. Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args.
Name This property is required. string
Parameter name. The parameter name is used as the key and, paired with the parameter value, is passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
Validation This property is required. ParameterValidationResponse
Optional. Validation rules to be applied to this parameter's value.
description This property is required. String
Optional. Brief description of the parameter. Must not exceed 1024 characters.
fields This property is required. List<String>
Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax. Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args.
name This property is required. String
Parameter name. The parameter name is used as the key and, paired with the parameter value, is passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
validation This property is required. ParameterValidationResponse
Optional. Validation rules to be applied to this parameter's value.
description This property is required. string
Optional. Brief description of the parameter. Must not exceed 1024 characters.
fields This property is required. string[]
Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax. Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args.
name This property is required. string
Parameter name. The parameter name is used as the key and, paired with the parameter value, is passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
validation This property is required. ParameterValidationResponse
Optional. Validation rules to be applied to this parameter's value.
description This property is required. str
Optional. Brief description of the parameter. Must not exceed 1024 characters.
fields This property is required. Sequence[str]
Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax. Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args.
name This property is required. str
Parameter name. The parameter name is used as the key and, paired with the parameter value, is passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
validation This property is required. ParameterValidationResponse
Optional. Validation rules to be applied to this parameter's value.
description This property is required. String
Optional. Brief description of the parameter. Must not exceed 1024 characters.
fields This property is required. List<String>
Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax. Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args.
name This property is required. String
Parameter name. The parameter name is used as the key and, paired with the parameter value, is passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
validation This property is required. Property Map
Optional. Validation rules to be applied to this parameter's value.
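
A hedged sketch showing how the TemplateParameterResponse entries above might be inspected, with illustrative identifiers: it maps each parameter name to the field paths it replaces.

import * as google_native from "@pulumi/google-native";

// Illustrative values; substitute a real location and workflow template id.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// Which template fields each declared parameter substitutes into.
export const parameterTargets = tpl.parameters.apply(params =>
    params.map(p => ({ name: p.name, replaces: p.fields })));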

ValueValidationResponse

Values This property is required. List<string>
List of allowed values for the parameter.
Values This property is required. []string
List of allowed values for the parameter.
values This property is required. List<String>
List of allowed values for the parameter.
values This property is required. string[]
List of allowed values for the parameter.
values This property is required. Sequence[str]
List of allowed values for the parameter.
values This property is required. List<String>
List of allowed values for the parameter.
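
A hedged sketch for value validation, assuming the nesting validation.values.values implied by the ParameterValidationResponse and ValueValidationResponse tables, plus illustrative identifiers: it lists parameters restricted to an allow-list and the values they accept.

import * as google_native from "@pulumi/google-native";

// Illustrative values; substitute a real location and workflow template id.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// Parameters constrained to an allow-list, with the values they accept.
export const allowedValues = tpl.parameters.apply(params =>
    params.filter(p => (p.validation?.values?.values ?? []).length > 0)
          .map(p => ({ name: p.name, values: p.validation?.values?.values ?? [] })));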

WorkflowTemplatePlacementResponse

ClusterSelector This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ClusterSelectorResponse
Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
ManagedCluster This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ManagedClusterResponse
Optional. A cluster that is managed by the workflow.
ClusterSelector This property is required. ClusterSelectorResponse
Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
ManagedCluster This property is required. ManagedClusterResponse
Optional. A cluster that is managed by the workflow.
clusterSelector This property is required. ClusterSelectorResponse
Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
managedCluster This property is required. ManagedClusterResponse
Optional. A cluster that is managed by the workflow.
clusterSelector This property is required. ClusterSelectorResponse
Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
managedCluster This property is required. ManagedClusterResponse
Optional. A cluster that is managed by the workflow.
cluster_selector This property is required. ClusterSelectorResponse
Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
managed_cluster This property is required. ManagedClusterResponse
Optional. A cluster that is managed by the workflow.
clusterSelector This property is required. Property Map
Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
managedCluster This property is required. Property Map
Optional. A cluster that is managed by the workflow.
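
A hedged sketch for placement, with illustrative identifiers: it reports whether the template provisions a workflow-managed cluster or selects an existing cluster by label, per the WorkflowTemplatePlacementResponse shape above.

import * as google_native from "@pulumi/google-native";

// Illustrative values; substitute a real location and workflow template id.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// Report whether the template provisions a managed cluster or selects an existing one by label.
export const placementSummary = tpl.placement.apply(p =>
    p.managedCluster?.clusterName
        ? `managed cluster: ${p.managedCluster.clusterName}`
        : `cluster selector labels: ${JSON.stringify(p.clusterSelector?.clusterLabels ?? {})}`);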

Package Details

Repository
Google Cloud Native pulumi/pulumi-google-native
License
Apache-2.0
