Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.dataproc/v1beta2.getWorkflowTemplate
Retrieves the latest workflow template. Can retrieve a previously instantiated template by specifying the optional version parameter.
Using getWorkflowTemplate
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getWorkflowTemplate(args: GetWorkflowTemplateArgs, opts?: InvokeOptions): Promise<GetWorkflowTemplateResult>
function getWorkflowTemplateOutput(args: GetWorkflowTemplateOutputArgs, opts?: InvokeOptions): Output<GetWorkflowTemplateResult>
def get_workflow_template(location: Optional[str] = None,
                          project: Optional[str] = None,
                          version: Optional[int] = None,
                          workflow_template_id: Optional[str] = None,
                          opts: Optional[InvokeOptions] = None) -> GetWorkflowTemplateResult
def get_workflow_template_output(location: Optional[pulumi.Input[str]] = None,
                                 project: Optional[pulumi.Input[str]] = None,
                                 version: Optional[pulumi.Input[int]] = None,
                                 workflow_template_id: Optional[pulumi.Input[str]] = None,
                                 opts: Optional[InvokeOptions] = None) -> Output[GetWorkflowTemplateResult]
func LookupWorkflowTemplate(ctx *Context, args *LookupWorkflowTemplateArgs, opts ...InvokeOption) (*LookupWorkflowTemplateResult, error)
func LookupWorkflowTemplateOutput(ctx *Context, args *LookupWorkflowTemplateOutputArgs, opts ...InvokeOption) LookupWorkflowTemplateResultOutput
> Note: This function is named LookupWorkflowTemplate in the Go SDK.
public static class GetWorkflowTemplate
{
    public static Task<GetWorkflowTemplateResult> InvokeAsync(GetWorkflowTemplateArgs args, InvokeOptions? opts = null)
    public static Output<GetWorkflowTemplateResult> Invoke(GetWorkflowTemplateInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetWorkflowTemplateResult> getWorkflowTemplate(GetWorkflowTemplateArgs args, InvokeOptions options)
public static Output<GetWorkflowTemplateResult> getWorkflowTemplate(GetWorkflowTemplateArgs args, InvokeOptions options)
fn::invoke:
  function: google-native:dataproc/v1beta2:getWorkflowTemplate
  arguments:
    # arguments dictionary
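For example, a minimal TypeScript sketch of both invocation forms (the project, region, and template ID below are placeholders, not values taken from this page):

import * as google_native from "@pulumi/google-native";

// Direct form: plain arguments, Promise-wrapped result.
const templatePromise = google_native.dataproc.v1beta2.getWorkflowTemplate({
    project: "my-project",             // placeholder project ID
    location: "us-central1",           // placeholder Dataproc region
    workflowTemplateId: "my-template", // placeholder template ID
});
export const templateName = templatePromise.then(t => t.name);

// Output form: Input-wrapped arguments, Output-wrapped result.
const templateOutput = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",
    location: "us-central1",
    workflowTemplateId: "my-template",
});
export const templateUpdateTime = templateOutput.updateTime;

The output form is the one to reach for when any argument comes from another resource's output.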
The following arguments are supported:
- Location This property is required. string
- WorkflowTemplateId This property is required. string
- Project string
- Version int
- Location This property is required. string
- WorkflowTemplateId This property is required. string
- Project string
- Version int
- location This property is required. String
- workflowTemplateId This property is required. String
- project String
- version Integer
- location This property is required. string
- workflowTemplateId This property is required. string
- project string
- version number
- location This property is required. str
- workflow_template_id This property is required. str
- project str
- version int
- location This property is required. String
- workflowTemplateId This property is required. String
- project String
- version Number
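As a sketch of how these arguments fit together (TypeScript; all identifiers below are placeholders), the optional version pins a previously instantiated template instead of the latest one:

import * as google_native from "@pulumi/google-native";

// Fetch a specific, previously instantiated template version.
// Omit `version` to retrieve the latest template; `project` falls back to the
// provider's configured project when not set.
const pinned = google_native.dataproc.v1beta2.getWorkflowTemplate({
    location: "us-central1",           // placeholder region
    workflowTemplateId: "my-template", // placeholder template ID
    version: 3,                        // placeholder version number
});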
getWorkflowTemplate Result
The following output properties are available:
- CreateTime string - The time template was created.
- DagTimeout string - Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- Jobs List<Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.OrderedJobResponse> - The Directed Acyclic Graph of Jobs to submit.
- Labels Dictionary<string, string> - Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
- Name string - The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- Parameters List<Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.TemplateParameterResponse> - Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- Placement Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.WorkflowTemplatePlacementResponse - WorkflowTemplate scheduling information.
- UpdateTime string - The time template was last updated.
- Version int - Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
- CreateTime string - The time template was created.
- DagTimeout string - Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- Jobs []OrderedJobResponse - The Directed Acyclic Graph of Jobs to submit.
- Labels map[string]string - Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
- Name string - The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- Parameters []TemplateParameterResponse - Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- Placement WorkflowTemplatePlacementResponse - WorkflowTemplate scheduling information.
- UpdateTime string - The time template was last updated.
- Version int - Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
- createTime String - The time template was created.
- dagTimeout String - Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- jobs List<OrderedJobResponse> - The Directed Acyclic Graph of Jobs to submit.
- labels Map<String,String> - Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
- name String - The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- parameters List<TemplateParameterResponse> - Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- placement WorkflowTemplatePlacementResponse - WorkflowTemplate scheduling information.
- updateTime String - The time template was last updated.
- version Integer - Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
- createTime string - The time template was created.
- dagTimeout string - Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- jobs OrderedJobResponse[] - The Directed Acyclic Graph of Jobs to submit.
- labels {[key: string]: string} - Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
- name string - The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- parameters TemplateParameterResponse[] - Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- placement WorkflowTemplatePlacementResponse - WorkflowTemplate scheduling information.
- updateTime string - The time template was last updated.
- version number - Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
- create_time str - The time template was created.
- dag_timeout str - Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- jobs Sequence[OrderedJobResponse] - The Directed Acyclic Graph of Jobs to submit.
- labels Mapping[str, str] - Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
- name str - The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- parameters Sequence[TemplateParameterResponse] - Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- placement WorkflowTemplatePlacementResponse - WorkflowTemplate scheduling information.
- update_time str - The time template was last updated.
- version int - Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
- createTime String - The time template was created.
- dagTimeout String - Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- jobs List<Property Map> - The Directed Acyclic Graph of Jobs to submit.
- labels Map<String> - Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
- name String - The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- parameters List<Property Map> - Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- placement Property Map - WorkflowTemplate scheduling information.
- updateTime String - The time template was last updated.
- version Number - Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
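A hedged TypeScript sketch of reading these output properties (the region and template ID are placeholders):

import * as google_native from "@pulumi/google-native";

const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    location: "us-central1",           // placeholder region
    workflowTemplateId: "my-template", // placeholder template ID
});

// Each output property listed above is a field on the result.
export const dagTimeout = tpl.dagTimeout;                              // e.g. "1800s" when set
export const labelKeys  = tpl.labels.apply(l => Object.keys(l || {}));
export const jobCount   = tpl.jobs.apply(jobs => jobs.length);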
Supporting Types
AcceleratorConfigResponse
- AcceleratorCount This property is required. int - The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri This property is required. string - Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- AcceleratorCount This property is required. int - The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri This property is required. string - Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount This property is required. Integer - The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri This property is required. String - Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount This property is required. number - The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri This property is required. string - Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- accelerator_count This property is required. int - The number of the accelerator cards of this type exposed to this instance.
- accelerator_type_uri This property is required. str - Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount This property is required. Number - The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri This property is required. String - Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
AutoscalingConfigResponse
- PolicyUri This property is required. string - Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- PolicyUri This property is required. string - Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policyUri This property is required. String - Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policyUri This property is required. string - Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policy_uri This property is required. str - Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policyUri This property is required. String - Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
ClusterConfigResponse
- AutoscalingConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.AutoscalingConfigResponse - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- ConfigBucket This property is required. string - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- EncryptionConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.EncryptionConfigResponse - Optional. Encryption settings for the cluster.
- EndpointConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.EndpointConfigResponse - Optional. Port/endpoint configuration for this cluster.
- GceClusterConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.GceClusterConfigResponse - Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.GkeClusterConfigResponse - Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions This property is required. List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.NodeInitializationActionResponse> - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LifecycleConfigResponse - Optional. The config setting for auto delete cluster schedule.
- MasterConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the master instance in a cluster.
- MetastoreConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.MetastoreConfigResponse - Optional. Metastore configuration.
- SecondaryWorkerConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse - Optional. The Compute Engine config settings for additional worker instances in a cluster.
- SecurityConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SecurityConfigResponse - Optional. Security related configuration.
- SoftwareConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SoftwareConfigResponse - Optional. The config settings for software inside the cluster.
- TempBucket This property is required. string - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- WorkerConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse - Optional. The Compute Engine config settings for worker instances in a cluster.
- AutoscalingConfig This property is required. AutoscalingConfigResponse - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- ConfigBucket This property is required. string - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- EncryptionConfig This property is required. EncryptionConfigResponse - Optional. Encryption settings for the cluster.
- EndpointConfig This property is required. EndpointConfigResponse - Optional. Port/endpoint configuration for this cluster.
- GceClusterConfig This property is required. GceClusterConfigResponse - Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig This property is required. GkeClusterConfigResponse - Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions This property is required. []NodeInitializationActionResponse - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig This property is required. LifecycleConfigResponse - Optional. The config setting for auto delete cluster schedule.
- MasterConfig This property is required. InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the master instance in a cluster.
- MetastoreConfig This property is required. MetastoreConfigResponse - Optional. Metastore configuration.
- SecondaryWorkerConfig This property is required. InstanceGroupConfigResponse - Optional. The Compute Engine config settings for additional worker instances in a cluster.
- SecurityConfig This property is required. SecurityConfigResponse - Optional. Security related configuration.
- SoftwareConfig This property is required. SoftwareConfigResponse - Optional. The config settings for software inside the cluster.
- TempBucket This property is required. string - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- WorkerConfig This property is required. InstanceGroupConfigResponse - Optional. The Compute Engine config settings for worker instances in a cluster.
- autoscalingConfig This property is required. AutoscalingConfigResponse - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- configBucket This property is required. String - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- encryptionConfig This property is required. EncryptionConfigResponse - Optional. Encryption settings for the cluster.
- endpointConfig This property is required. EndpointConfigResponse - Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig This property is required. GceClusterConfigResponse - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig This property is required. GkeClusterConfigResponse - Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions This property is required. List<NodeInitializationActionResponse> - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig This property is required. LifecycleConfigResponse - Optional. The config setting for auto delete cluster schedule.
- masterConfig This property is required. InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the master instance in a cluster.
- metastoreConfig This property is required. MetastoreConfigResponse - Optional. Metastore configuration.
- secondaryWorkerConfig This property is required. InstanceGroupConfigResponse - Optional. The Compute Engine config settings for additional worker instances in a cluster.
- securityConfig This property is required. SecurityConfigResponse - Optional. Security related configuration.
- softwareConfig This property is required. SoftwareConfigResponse - Optional. The config settings for software inside the cluster.
- tempBucket This property is required. String - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- workerConfig This property is required. InstanceGroupConfigResponse - Optional. The Compute Engine config settings for worker instances in a cluster.
- autoscalingConfig This property is required. AutoscalingConfigResponse - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- configBucket This property is required. string - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- encryptionConfig This property is required. EncryptionConfigResponse - Optional. Encryption settings for the cluster.
- endpointConfig This property is required. EndpointConfigResponse - Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig This property is required. GceClusterConfigResponse - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig This property is required. GkeClusterConfigResponse - Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions This property is required. NodeInitializationActionResponse[] - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig This property is required. LifecycleConfigResponse - Optional. The config setting for auto delete cluster schedule.
- masterConfig This property is required. InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the master instance in a cluster.
- metastoreConfig This property is required. MetastoreConfigResponse - Optional. Metastore configuration.
- secondaryWorkerConfig This property is required. InstanceGroupConfigResponse - Optional. The Compute Engine config settings for additional worker instances in a cluster.
- securityConfig This property is required. SecurityConfigResponse - Optional. Security related configuration.
- softwareConfig This property is required. SoftwareConfigResponse - Optional. The config settings for software inside the cluster.
- tempBucket This property is required. string - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- workerConfig This property is required. InstanceGroupConfigResponse - Optional. The Compute Engine config settings for worker instances in a cluster.
- autoscaling_config This property is required. AutoscalingConfigResponse - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- config_bucket This property is required. str - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- encryption_config This property is required. EncryptionConfigResponse - Optional. Encryption settings for the cluster.
- endpoint_config This property is required. EndpointConfigResponse - Optional. Port/endpoint configuration for this cluster.
- gce_cluster_config This property is required. GceClusterConfigResponse - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gke_cluster_config This property is required. GkeClusterConfigResponse - Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initialization_actions This property is required. Sequence[NodeInitializationActionResponse] - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycle_config This property is required. LifecycleConfigResponse - Optional. The config setting for auto delete cluster schedule.
- master_config This property is required. InstanceGroupConfigResponse - Optional. The Compute Engine config settings for the master instance in a cluster.
- metastore_config This property is required. MetastoreConfigResponse - Optional. Metastore configuration.
- secondary_worker_config This property is required. InstanceGroupConfigResponse - Optional. The Compute Engine config settings for additional worker instances in a cluster.
- security_config This property is required. SecurityConfigResponse - Optional. Security related configuration.
- software_config This property is required. SoftwareConfigResponse - Optional. The config settings for software inside the cluster.
- temp_bucket This property is required. str - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- worker_config This property is required. InstanceGroupConfigResponse - Optional. The Compute Engine config settings for worker instances in a cluster.
- autoscalingConfig This property is required. Property Map - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- configBucket This property is required. String - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- encryptionConfig This property is required. Property Map - Optional. Encryption settings for the cluster.
- endpointConfig This property is required. Property Map - Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig This property is required. Property Map - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig This property is required. Property Map - Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions This property is required. List<Property Map> - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig This property is required. Property Map - Optional. The config setting for auto delete cluster schedule.
- masterConfig This property is required. Property Map - Optional. The Compute Engine config settings for the master instance in a cluster.
- metastoreConfig This property is required. Property Map - Optional. Metastore configuration.
- secondaryWorkerConfig This property is required. Property Map - Optional. The Compute Engine config settings for additional worker instances in a cluster.
- securityConfig This property is required. Property Map - Optional. Security related configuration.
- softwareConfig This property is required. Property Map - Optional. The config settings for software inside the cluster.
- tempBucket This property is required. String - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- workerConfig This property is required. Property Map - Optional. The Compute Engine config settings for worker instances in a cluster.
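The ClusterConfigResponse above is not returned at the top level of the result; it arrives nested under the template's placement when the template provisions a managed cluster. A hedged TypeScript sketch, assuming the placement's managedCluster branch is populated (region and template ID are placeholders):

import * as google_native from "@pulumi/google-native";

const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    location: "us-central1",           // placeholder region
    workflowTemplateId: "my-template", // placeholder template ID
});

// Templates that use a cluster selector have no managed cluster, so guard the access.
export const stagingBucket = tpl.placement.apply(p =>
    p.managedCluster ? p.managedCluster.config.configBucket : undefined);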
ClusterSelectorResponse
- ClusterLabels This property is required. Dictionary<string, string> - The cluster labels. Cluster must have all labels to match.
- Zone This property is required. string - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- ClusterLabels This property is required. map[string]string - The cluster labels. Cluster must have all labels to match.
- Zone This property is required. string - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- clusterLabels This property is required. Map<String,String> - The cluster labels. Cluster must have all labels to match.
- zone This property is required. String - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- clusterLabels This property is required. {[key: string]: string} - The cluster labels. Cluster must have all labels to match.
- zone This property is required. string - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- cluster_labels This property is required. Mapping[str, str] - The cluster labels. Cluster must have all labels to match.
- zone This property is required. str - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- clusterLabels This property is required. Map<String> - The cluster labels. Cluster must have all labels to match.
- zone This property is required. String - Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
DiskConfigResponse
- Boot
Disk Size Gb This property is required. int - Optional. Size in GB of the boot disk (default is 500GB).
- Boot
Disk Type This property is required. string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- Num
Local Ssds This property is required. int - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- Boot
Disk Size Gb This property is required. int - Optional. Size in GB of the boot disk (default is 500GB).
- Boot
Disk Type This property is required. string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- Num
Local Ssds This property is required. int - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- boot
Disk Size Gb This property is required. Integer - Optional. Size in GB of the boot disk (default is 500GB).
- boot
Disk Type This property is required. String - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- num
Local Ssds This property is required. Integer - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- boot
Disk Size Gb This property is required. number - Optional. Size in GB of the boot disk (default is 500GB).
- boot
Disk Type This property is required. string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- num
Local Ssds This property is required. number - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- boot_
disk_ size_ gb This property is required. int - Optional. Size in GB of the boot disk (default is 500GB).
- boot_
disk_ type This property is required. str - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- num_
local_ ssds This property is required. int - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- boot
Disk Size Gb This property is required. Number - Optional. Size in GB of the boot disk (default is 500GB).
- boot
Disk Type This property is required. String - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- num
Local Ssds This property is required. Number - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
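The disk settings above appear on each instance group of the template's managed cluster. A minimal TypeScript sketch of reading them after the lookup; the project, location, and template id are placeholders, and the placement.managedCluster.config.masterConfig nesting is an assumption about the WorkflowTemplate layout, not something defined on this page:

import * as google_native from "@pulumi/google-native";

// Placeholder project/location/template id.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// Assumed path: a template that places jobs on a managed cluster exposes the
// master group's DiskConfigResponse under placement.managedCluster.config.masterConfig.
export const masterBootDisk = tpl.apply(t => {
    const disk = t.placement?.managedCluster?.config?.masterConfig?.diskConfig;
    if (!disk) {
        return "no managed-cluster placement";
    }
    // Fallbacks mirror the defaults documented above (pd-standard, 500 GB, 0 SSDs).
    return `${disk.bootDiskType ?? "pd-standard"}, ${disk.bootDiskSizeGb ?? 500} GB, ${disk.numLocalSsds ?? 0} local SSD(s)`;
});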
EncryptionConfigResponse
- Gce
Pd Kms Key Name This property is required. string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- Gce
Pd Kms Key Name This property is required. string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- gce
Pd Kms Key Name This property is required. String - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- gce
Pd Kms Key Name This property is required. string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- gce_
pd_ kms_ key_ name This property is required. str - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- gce
Pd Kms Key Name This property is required. String - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
EndpointConfigResponse
- Enable
Http Port Access This property is required. bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- Http
Ports This property is required. Dictionary<string, string> - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- Enable
Http Port Access This property is required. bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- Http
Ports This property is required. map[string]string - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enable
Http Port Access This property is required. Boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- http
Ports This property is required. Map<String,String> - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enable
Http Port Access This property is required. boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- http
Ports This property is required. {[key: string]: string} - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enable_
http_ port_ access This property is required. bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- http_
ports This property is required. Mapping[str, str] - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enable
Http Port Access This property is required. Boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- http
Ports This property is required. Map<String> - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
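Because httpPorts is only populated when enableHttpPortAccess is true, a consumer typically guards on that flag before exporting the URLs. A hedged TypeScript sketch; identifiers are placeholders and the placement.managedCluster.config.endpointConfig nesting is assumed:

import * as google_native from "@pulumi/google-native";

const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",          // placeholder values
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// The Component Gateway URL map is empty unless HTTP port access was enabled.
export const gatewayUrls = tpl.apply(t => {
    const ep = t.placement?.managedCluster?.config?.endpointConfig; // assumed nesting
    return ep?.enableHttpPortAccess ? ep.httpPorts : {};
});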
GceClusterConfigResponse
- Internal
Ip Only This property is required. bool - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata
This property is required. Dictionary<string, string> - The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- Network
Uri This property is required. string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- Node
Group Affinity This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Node Group Affinity Response - Optional. Node Group Affinity for sole-tenant clusters.
- Private
Ipv6Google Access This property is required. string - Optional. The type of IPv6 access for a cluster.
- Reservation
Affinity This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Reservation Affinity Response - Optional. Reservation Affinity for consuming Zonal reservation.
- Service
Account This property is required. string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- Service
Account Scopes This property is required. List<string> - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- Shielded
Instance Config This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Shielded Instance Config Response - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- Subnetwork
Uri This property is required. string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- Tags
This property is required. List<string> - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- Zone
Uri This property is required. string - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- Internal
Ip Only This property is required. bool - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata
This property is required. map[string]string - The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- Network
Uri This property is required. string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- Node
Group Affinity This property is required. NodeGroup Affinity Response - Optional. Node Group Affinity for sole-tenant clusters.
- Private
Ipv6Google Access This property is required. string - Optional. The type of IPv6 access for a cluster.
- Reservation
Affinity This property is required. ReservationAffinity Response - Optional. Reservation Affinity for consuming Zonal reservation.
- Service
Account This property is required. string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- Service
Account Scopes This property is required. []string - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- Shielded
Instance Config This property is required. ShieldedInstance Config Response - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- Subnetwork
Uri This property is required. string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- Tags
This property is required. []string - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- Zone
Uri This property is required. string - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- internal
Ip Only This property is required. Boolean - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata
This property is required. Map<String,String> - The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network
Uri This property is required. String - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- node
Group Affinity This property is required. NodeGroup Affinity Response - Optional. Node Group Affinity for sole-tenant clusters.
- private
Ipv6Google Access This property is required. String - Optional. The type of IPv6 access for a cluster.
- reservation
Affinity This property is required. ReservationAffinity Response - Optional. Reservation Affinity for consuming Zonal reservation.
- service
Account This property is required. String - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service
Account Scopes This property is required. List<String> - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded
Instance Config This property is required. ShieldedInstance Config Response - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork
Uri This property is required. String - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- tags
This property is required. List<String> - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone
Uri This property is required. String - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- internal
Ip Only This property is required. boolean - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata
This property is required. {[key: string]: string} - The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network
Uri This property is required. string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- node
Group Affinity This property is required. NodeGroup Affinity Response - Optional. Node Group Affinity for sole-tenant clusters.
- private
Ipv6Google Access This property is required. string - Optional. The type of IPv6 access for a cluster.
- reservation
Affinity This property is required. ReservationAffinity Response - Optional. Reservation Affinity for consuming Zonal reservation.
- service
Account This property is required. string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service
Account Scopes This property is required. string[] - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded
Instance Config This property is required. ShieldedInstance Config Response - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork
Uri This property is required. string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- tags
This property is required. string[] - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone
Uri This property is required. string - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- internal_
ip_ only This property is required. bool - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata
This property is required. Mapping[str, str] - The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network_
uri This property is required. str - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- node_
group_ affinity This property is required. NodeGroup Affinity Response - Optional. Node Group Affinity for sole-tenant clusters.
- private_
ipv6_ google_ access This property is required. str - Optional. The type of IPv6 access for a cluster.
- reservation_
affinity This property is required. ReservationAffinity Response - Optional. Reservation Affinity for consuming Zonal reservation.
- service_
account This property is required. str - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service_
account_ scopes This property is required. Sequence[str] - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded_
instance_ config This property is required. ShieldedInstance Config Response - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork_
uri This property is required. str - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- tags
This property is required. Sequence[str] - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone_
uri This property is required. str - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- internal
Ip Only This property is required. Boolean - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata
This property is required. Map<String> - The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network
Uri This property is required. String - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- node
Group Affinity This property is required. Property Map - Optional. Node Group Affinity for sole-tenant clusters.
- private
Ipv6Google Access This property is required. String - Optional. The type of IPv6 access for a cluster.
- reservation
Affinity This property is required. Property Map - Optional. Reservation Affinity for consuming Zonal reservation.
- service
Account This property is required. String - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service
Account Scopes This property is required. List<String> - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded
Instance Config This property is required. Property Map - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork
Uri This property is required. String - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- tags
This property is required. List<String> - The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone
Uri This property is required. String - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
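These networking and identity settings apply to every VM of the template's managed cluster. A small TypeScript sketch that summarizes them; the identifiers are placeholders and the nested placement path is an assumption, with fallbacks mirroring the documented defaults:

import * as google_native from "@pulumi/google-native";

const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// Summarize the GceClusterConfigResponse of the (assumed) managed cluster.
export const gceSummary = tpl.apply(t => {
    const gce = t.placement?.managedCluster?.config?.gceClusterConfig;
    return {
        internalIpOnly: gce?.internalIpOnly ?? false,
        network: gce?.subnetworkUri || gce?.networkUri || "default network",
        serviceAccount: gce?.serviceAccount || "Compute Engine default service account",
        tags: gce?.tags ?? [],
    };
});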
GkeClusterConfigResponse
- Namespaced
Gke Deployment Target This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Namespaced Gke Deployment Target Response - Optional. A target for the deployment.
- Namespaced
Gke Deployment Target This property is required. NamespacedGke Deployment Target Response - Optional. A target for the deployment.
- namespaced
Gke Deployment Target This property is required. NamespacedGke Deployment Target Response - Optional. A target for the deployment.
- namespaced
Gke Deployment Target This property is required. NamespacedGke Deployment Target Response - Optional. A target for the deployment.
- namespaced_
gke_ deployment_ target This property is required. NamespacedGke Deployment Target Response - Optional. A target for the deployment.
- namespaced
Gke Deployment Target This property is required. Property Map - Optional. A target for the deployment.
HadoopJobResponse
- Archive
Uris This property is required. List<string> - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- Args
This property is required. List<string> - Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- File
Uris This property is required. List<string> - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- Jar
File Uris This property is required. List<string> - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- Logging
Config This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Logging Config Response - Optional. The runtime log config for job execution.
- Main
Class This property is required. string - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- Main
Jar File Uri This property is required. string - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- Properties
This property is required. Dictionary<string, string> - Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- Archive
Uris This property is required. []string - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- Args
This property is required. []string - Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- File
Uris This property is required. []string - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- Jar
File Uris This property is required. []string - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- Logging
Config This property is required. LoggingConfig Response - Optional. The runtime log config for job execution.
- Main
Class This property is required. string - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- Main
Jar File Uri This property is required. string - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- Properties
This property is required. map[string]string - Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archive
Uris This property is required. List<String> - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args
This property is required. List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file
Uris This property is required. List<String> - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jar
File Uris This property is required. List<String> - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- logging
Config This property is required. LoggingConfig Response - Optional. The runtime log config for job execution.
- main
Class This property is required. String - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- main
Jar File Uri This property is required. String - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties
This property is required. Map<String,String> - Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archive
Uris This property is required. string[] - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args
This property is required. string[] - Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file
Uris This property is required. string[] - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jar
File Uris This property is required. string[] - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- logging
Config This property is required. LoggingConfig Response - Optional. The runtime log config for job execution.
- main
Class This property is required. string - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- main
Jar File Uri This property is required. string - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties
This property is required. {[key: string]: string} - Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archive_
uris This property is required. Sequence[str] - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args
This property is required. Sequence[str] - Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_
uris This property is required. Sequence[str] - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jar_
file_ uris This property is required. Sequence[str] - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- logging_
config This property is required. LoggingConfig Response - Optional. The runtime log config for job execution.
- main_
class This property is required. str - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- main_
jar_ file_ uri This property is required. str - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties
This property is required. Mapping[str, str] - Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archive
Uris This property is required. List<String> - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args
This property is required. List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file
Uris This property is required. List<String> - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jar
File Uris This property is required. List<String> - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- logging
Config This property is required. Property Map - Optional. The runtime log config for job execution.
- main
Class This property is required. String - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- main
Jar File Uri This property is required. String - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties
This property is required. Map<String> - Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
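A Hadoop job appears in the template as one step of the jobs DAG, identified by either mainJarFileUri or mainClass. A TypeScript sketch that lists such steps; the jobs/stepId/hadoopJob shape is assumed from the WorkflowTemplate layout and the identifiers are placeholders:

import * as google_native from "@pulumi/google-native";

const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// Each entry in t.jobs is one step of the DAG; keep only steps that carry a Hadoop job.
export const hadoopSteps = tpl.apply(t =>
    (t.jobs ?? [])
        .filter(j => j.hadoopJob)
        .map(j => ({
            stepId: j.stepId,
            main: j.hadoopJob!.mainJarFileUri || j.hadoopJob!.mainClass,
            args: j.hadoopJob!.args ?? [],
        })));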
HiveJobResponse
- Continue
On Failure This property is required. bool - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- Jar
File Uris This property is required. List<string> - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- Properties
This property is required. Dictionary<string, string> - Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- Query
File Uri This property is required. string - The HCFS URI of the script that contains Hive queries.
- Query
List This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Query List Response - A list of queries.
- Script
Variables This property is required. Dictionary<string, string> - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- Continue
On Failure This property is required. bool - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- Jar
File Uris This property is required. []string - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- Properties
This property is required. map[string]string - Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- Query
File Uri This property is required. string - The HCFS URI of the script that contains Hive queries.
- Query
List This property is required. QueryList Response - A list of queries.
- Script
Variables This property is required. map[string]string - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continue
On Failure This property is required. Boolean - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar
File Uris This property is required. List<String> - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties
This property is required. Map<String,String> - Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- query
File Uri This property is required. String - The HCFS URI of the script that contains Hive queries.
- query
List This property is required. QueryList Response - A list of queries.
- script
Variables This property is required. Map<String,String> - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continue
On Failure This property is required. boolean - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar
File Uris This property is required. string[] - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties
This property is required. {[key: string]: string} - Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- query
File Uri This property is required. string - The HCFS URI of the script that contains Hive queries.
- query
List This property is required. QueryList Response - A list of queries.
- script
Variables This property is required. {[key: string]: string} - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continue_
on_ failure This property is required. bool - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar_
file_ uris This property is required. Sequence[str] - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties
This property is required. Mapping[str, str] - Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- query_
file_ uri This property is required. str - The HCFS URI of the script that contains Hive queries.
- query_
list This property is required. QueryList Response - A list of queries.
- script_
variables This property is required. Mapping[str, str] - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continue
On Failure This property is required. Boolean - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar
File Uris This property is required. List<String> - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties
This property is required. Map<String> - Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- query
File Uri This property is required. String - The HCFS URI of the script that contains Hive queries.
- query
List This property is required. Property Map - A list of queries.
- script
Variables This property is required. Map<String> - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
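A Hive step supplies its queries either through queryFileUri or an inline queryList, optionally parameterized via scriptVariables. A TypeScript sketch along the same assumed lines as the Hadoop example above (placeholder identifiers, assumed jobs/hiveJob shape):

import * as google_native from "@pulumi/google-native";

const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// Report, per Hive step, where the queries come from and which variables are set.
export const hiveSteps = tpl.apply(t =>
    (t.jobs ?? [])
        .filter(j => j.hiveJob)
        .map(j => ({
            stepId: j.stepId,
            source: j.hiveJob!.queryFileUri || "inline queryList",
            continueOnFailure: j.hiveJob!.continueOnFailure ?? false,
            variables: j.hiveJob!.scriptVariables ?? {},
        })));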
InstanceGroupConfigResponse
- Accelerators
This property is required. List<Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Accelerator Config Response> - Optional. The Compute Engine accelerator configuration for these instances.
- Disk
Config This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Disk Config Response - Optional. Disk option config settings.
- Image
Uri This property is required. string - Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- Instance
Names This property is required. List<string> - The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- Instance
References This property is required. List<Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Instance Reference Response> - List of references to Compute Engine instances.
- Is
Preemptible This property is required. bool - Specifies that this instance group contains preemptible instances.
- Machine
Type Uri This property is required. string - Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- Managed
Group Config This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Managed Group Config Response - The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- Min
Cpu Platform This property is required. string - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- Num
Instances This property is required. int - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility
This property is required. string - Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- Accelerators
This property is required. []AcceleratorConfig Response - Optional. The Compute Engine accelerator configuration for these instances.
- Disk
Config This property is required. DiskConfig Response - Optional. Disk option config settings.
- Image
Uri This property is required. string - Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- Instance
Names This property is required. []string - The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- Instance
References This property is required. []InstanceReference Response - List of references to Compute Engine instances.
- Is
Preemptible This property is required. bool - Specifies that this instance group contains preemptible instances.
- Machine
Type Uri This property is required. string - Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- Managed
Group Config This property is required. ManagedGroup Config Response - The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- Min
Cpu Platform This property is required. string - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- Num
Instances This property is required. int - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility
This property is required. string - Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- accelerators
This property is required. List<AcceleratorConfig Response> - Optional. The Compute Engine accelerator configuration for these instances.
- disk
Config This property is required. DiskConfig Response - Optional. Disk option config settings.
- image
Uri This property is required. String - Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instance
Names This property is required. List<String> - The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instance
References This property is required. List<InstanceReference Response> - List of references to Compute Engine instances.
- is
Preemptible This property is required. Boolean - Specifies that this instance group contains preemptible instances.
- machine
Type Uri This property is required. String - Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managed
Group Config This property is required. ManagedGroup Config Response - The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- min
Cpu Platform This property is required. String - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- num
Instances This property is required. Integer - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility
This property is required. String - Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- accelerators
This property is required. AcceleratorConfig Response[] - Optional. The Compute Engine accelerator configuration for these instances.
- disk
Config This property is required. DiskConfig Response - Optional. Disk option config settings.
- image
Uri This property is required. string - Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instance
Names This property is required. string[] - The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instance
References This property is required. InstanceReference Response[] - List of references to Compute Engine instances.
- is
Preemptible This property is required. boolean - Specifies that this instance group contains preemptible instances.
- machine
Type Uri This property is required. string - Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managed
Group Config This property is required. ManagedGroup Config Response - The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- min
Cpu Platform This property is required. string - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- num
Instances This property is required. number - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility
This property is required. string - Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- accelerators
This property is required. Sequence[AcceleratorConfig Response] - Optional. The Compute Engine accelerator configuration for these instances.
- disk_
config This property is required. DiskConfig Response - Optional. Disk option config settings.
- image_
uri This property is required. str - Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instance_
names This property is required. Sequence[str] - The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instance_
references This property is required. Sequence[InstanceReference Response] - List of references to Compute Engine instances.
- is_
preemptible This property is required. bool - Specifies that this instance group contains preemptible instances.
- machine_
type_ uri This property is required. str - Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managed_
group_ config This property is required. ManagedGroup Config Response - The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- min_
cpu_ platform This property is required. str - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- num_
instances This property is required. int - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility
This property is required. str - Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- accelerators
This property is required. List<Property Map> - Optional. The Compute Engine accelerator configuration for these instances.
- disk
Config This property is required. Property Map - Optional. Disk option config settings.
- image
Uri This property is required. String - Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instance
Names This property is required. List<String> - The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instance
References This property is required. List<Property Map> - List of references to Compute Engine instances.
- is
Preemptible This property is required. Boolean - Specifies that this instance group contains preemptible instances.
- machine
Type Uri This property is required. String - Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managed
Group Config This property is required. Property Map - The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- min
Cpu Platform This property is required. String - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- num
Instances This property is required. Number - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility
This property is required. String - Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
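To make the instance group fields above easier to read back in practice, here is a minimal TypeScript sketch. The project, location, and template id are placeholders, and it assumes the template provisions a managed cluster rather than selecting an existing one:
import * as google_native from "@pulumi/google-native";

// Minimal sketch, not canonical usage: placeholder project/location/template id.
// Reads the worker instance group of the template's managed cluster, if it has one.
const tmpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",              // placeholder
    location: "us-central1",            // placeholder
    workflowTemplateId: "my-template",  // placeholder
});

// workerConfig only carries data when the template provisions a managed cluster
// (templates that use a cluster selector have no instance groups of their own).
export const workerSummary = tmpl.placement.apply(p => {
    const worker = p.managedCluster?.config?.workerConfig;
    return worker
        ? `${worker.numInstances} x ${worker.machineTypeUri} (preemptibility: ${worker.preemptibility})`
        : "template targets an existing cluster via clusterSelector";
});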
InstanceReferenceResponse
- Instance
Id This property is required. string - The unique identifier of the Compute Engine instance.
- Instance
Name This property is required. string - The user-friendly name of the Compute Engine instance.
- Public
Key This property is required. string - The public key used for sharing data with this instance.
- Instance
Id This property is required. string - The unique identifier of the Compute Engine instance.
- Instance
Name This property is required. string - The user-friendly name of the Compute Engine instance.
- Public
Key This property is required. string - The public key used for sharing data with this instance.
- instance
Id This property is required. String - The unique identifier of the Compute Engine instance.
- instance
Name This property is required. String - The user-friendly name of the Compute Engine instance.
- public
Key This property is required. String - The public key used for sharing data with this instance.
- instance
Id This property is required. string - The unique identifier of the Compute Engine instance.
- instance
Name This property is required. string - The user-friendly name of the Compute Engine instance.
- public
Key This property is required. string - The public key used for sharing data with this instance.
- instance_
id This property is required. str - The unique identifier of the Compute Engine instance.
- instance_
name This property is required. str - The user-friendly name of the Compute Engine instance.
- public_
key This property is required. str - The public key used for sharing data with this instance.
- instance
Id This property is required. String - The unique identifier of the Compute Engine instance.
- instance
Name This property is required. String - The user-friendly name of the Compute Engine instance.
- public
Key This property is required. String - The public key used for sharing data with this instance.
JobSchedulingResponse
- Max
Failures Per Hour This property is required. int - Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- Max
Failures Total This property is required. int - Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- Max
Failures Per Hour This property is required. int - Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- Max
Failures Total This property is required. int - Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- max
Failures Per Hour This property is required. Integer - Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- max
Failures Total This property is required. Integer - Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- max
Failures Per Hour This property is required. number - Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- max
Failures Total This property is required. number - Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- max_
failures_ per_ hour This property is required. int - Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- max_
failures_ total This property is required. int - Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- max
Failures Per Hour This property is required. Number - Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- max
Failures Total This property is required. Number - Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
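The JobScheduling limits above can be inspected per job with the direct form of the invoke. A minimal sketch with placeholder identifiers:
import * as google_native from "@pulumi/google-native";

// Minimal sketch with placeholder identifiers: print each ordered job's driver
// restart limits using the direct (promise) form of the invoke.
async function reportSchedulingLimits(): Promise<void> {
    const tmpl = await google_native.dataproc.v1beta2.getWorkflowTemplate({
        project: "my-project",              // placeholder
        location: "us-central1",            // placeholder
        workflowTemplateId: "my-template",  // placeholder
    });
    for (const job of tmpl.jobs) {
        const s = job.scheduling;
        console.log(`${job.stepId}: maxFailuresPerHour=${s?.maxFailuresPerHour ?? "unset"}, ` +
            `maxFailuresTotal=${s?.maxFailuresTotal ?? "unset"}`);
    }
}

reportSchedulingLimits().catch(err => console.error(err));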
KerberosConfigResponse
- Cross
Realm Trust Admin Server This property is required. string - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Cross
Realm Trust Kdc This property is required. string - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Cross
Realm Trust Realm This property is required. string - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- Cross
Realm Trust Shared Password Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- Enable
Kerberos This property is required. bool - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- Kdc
Db Key Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- Key
Password Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- Keystore
Password Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- Keystore
Uri This property is required. string - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- Kms
Key Uri This property is required. string - Optional. The uri of the KMS key used to encrypt various sensitive files.
- Realm
This property is required. string - Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- Root
Principal Password Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- Tgt
Lifetime Hours This property is required. int - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- Truststore
Password Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- Truststore
Uri This property is required. string - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- Cross
Realm Trust Admin Server This property is required. string - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Cross
Realm Trust Kdc This property is required. string - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Cross
Realm Trust Realm This property is required. string - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- Cross
Realm Trust Shared Password Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- Enable
Kerberos This property is required. bool - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- Kdc
Db Key Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- Key
Password Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- Keystore
Password Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- Keystore
Uri This property is required. string - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- Kms
Key Uri This property is required. string - Optional. The uri of the KMS key used to encrypt various sensitive files.
- Realm
This property is required. string - Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- Root
Principal Password Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- Tgt
Lifetime Hours This property is required. int - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- Truststore
Password Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- Truststore
Uri This property is required. string - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- cross
Realm Trust Admin Server This property is required. String - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross
Realm Trust Kdc This property is required. String - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross
Realm Trust Realm This property is required. String - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- cross
Realm Trust Shared Password Uri This property is required. String - Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enable
Kerberos This property is required. Boolean - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdc
Db Key Uri This property is required. String - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- key
Password Uri This property is required. String - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystore
Password Uri This property is required. String - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystore
Uri This property is required. String - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kms
Key Uri This property is required. String - Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm
This property is required. String - Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- root
Principal Password Uri This property is required. String - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgt
Lifetime Hours This property is required. Integer - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststore
Password Uri This property is required. String - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststore
Uri This property is required. String - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- cross
Realm Trust Admin Server This property is required. string - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross
Realm Trust Kdc This property is required. string - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross
Realm Trust Realm This property is required. string - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- cross
Realm Trust Shared Password Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enable
Kerberos This property is required. boolean - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdc
Db Key Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- key
Password Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystore
Password Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystore
Uri This property is required. string - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kms
Key Uri This property is required. string - Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm
This property is required. string - Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- root
Principal Password Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgt
Lifetime Hours This property is required. number - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststore
Password Uri This property is required. string - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststore
Uri This property is required. string - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- cross_
realm_ trust_ admin_ server This property is required. str - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross_
realm_ trust_ kdc This property is required. str - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross_
realm_ trust_ realm This property is required. str - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- cross_
realm_ trust_ shared_ password_ uri This property is required. str - Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enable_
kerberos This property is required. bool - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdc_
db_ key_ uri This property is required. str - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- key_
password_ uri This property is required. str - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystore_
password_ uri This property is required. str - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystore_
uri This property is required. str - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kms_
key_ uri This property is required. str - Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm
This property is required. str - Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- root_
principal_ password_ uri This property is required. str - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgt_
lifetime_ hours This property is required. int - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststore_
password_ uri This property is required. str - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststore_
uri This property is required. str - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- cross
Realm Trust Admin Server This property is required. String - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross
Realm Trust Kdc This property is required. String - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross
Realm Trust Realm This property is required. String - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- cross
Realm Trust Shared Password Uri This property is required. String - Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enable
Kerberos This property is required. Boolean - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdc
Db Key Uri This property is required. String - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- key
Password Uri This property is required. String - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystore
Password Uri This property is required. String - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystore
Uri This property is required. String - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kms
Key Uri This property is required. String - Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm
This property is required. String - Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- root
Principal Password Uri This property is required. String - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgt
Lifetime Hours This property is required. Number - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststore
Password Uri This property is required. String - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststore
Uri This property is required. String - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
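As a hedged example of where these Kerberos fields appear in the result, the sketch below (placeholder identifiers) checks whether the template's managed cluster enables Kerberos; it assumes kerberosConfig is nested under the cluster config's securityConfig, as in the v1beta2 API:
import * as google_native from "@pulumi/google-native";

// Minimal sketch with placeholder identifiers: check whether the managed cluster
// defined by the template is Kerberized. Assumes kerberosConfig sits under the
// cluster config's securityConfig.
export const kerberosEnabled = google_native.dataproc.v1beta2
    .getWorkflowTemplateOutput({
        project: "my-project",              // placeholder
        location: "us-central1",            // placeholder
        workflowTemplateId: "my-template",  // placeholder
    })
    .placement.apply(p =>
        p.managedCluster?.config?.securityConfig?.kerberosConfig?.enableKerberos ?? false);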
LifecycleConfigResponse
- Auto
Delete Time This property is required. string - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Auto
Delete Ttl This property is required. string - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Idle
Delete Ttl This property is required. string - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Idle
Start Time This property is required. string - The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Auto
Delete Time This property is required. string - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Auto
Delete Ttl This property is required. string - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Idle
Delete Ttl This property is required. string - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Idle
Start Time This property is required. string - The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto
Delete Time This property is required. String - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto
Delete Ttl This property is required. String - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle
Delete Ttl This property is required. String - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle
Start Time This property is required. String - The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto
Delete Time This property is required. string - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto
Delete Ttl This property is required. string - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle
Delete Ttl This property is required. string - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle
Start Time This property is required. string - The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto_
delete_ time This property is required. str - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto_
delete_ ttl This property is required. str - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle_
delete_ ttl This property is required. str - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle_
start_ time This property is required. str - The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto
Delete Time This property is required. String - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto
Delete Ttl This property is required. String - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle
Delete Ttl This property is required. String - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle
Start Time This property is required. String - The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
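A minimal sketch, again with placeholder identifiers, showing how the LifecycleConfig durations can be read back; both TTLs are returned as JSON duration strings such as "1800s":
import * as google_native from "@pulumi/google-native";

// Minimal sketch with placeholder identifiers: report the managed cluster's
// idle-delete and auto-delete settings (JSON duration strings, e.g. "1800s").
export const lifecycleSummary = google_native.dataproc.v1beta2
    .getWorkflowTemplateOutput({
        project: "my-project",              // placeholder
        location: "us-central1",            // placeholder
        workflowTemplateId: "my-template",  // placeholder
    })
    .placement.apply(p => {
        const lc = p.managedCluster?.config?.lifecycleConfig;
        return lc
            ? `idleDeleteTtl=${lc.idleDeleteTtl || "unset"}, autoDeleteTtl=${lc.autoDeleteTtl || "unset"}`
            : "no managed cluster lifecycle config";
    });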
LoggingConfigResponse
- Driver
Log Levels This property is required. Dictionary<string, string> - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- Driver
Log Levels This property is required. map[string]string - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driver
Log Levels This property is required. Map<String,String> - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driver
Log Levels This property is required. {[key: string]: string} - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driver_
log_ levels This property is required. Mapping[str, str] - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driver
Log Levels This property is required. Map<String> - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
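The driver log levels map is attached to the individual job types (for example a Spark job's loggingConfig). A hedged sketch with placeholder identifiers:
import * as google_native from "@pulumi/google-native";

// Minimal sketch with placeholder identifiers: print the per-package driver log
// levels configured for any Spark jobs in the template,
// e.g. { root: "INFO", "org.apache": "DEBUG" }.
async function showDriverLogLevels(): Promise<void> {
    const tmpl = await google_native.dataproc.v1beta2.getWorkflowTemplate({
        project: "my-project",              // placeholder
        location: "us-central1",            // placeholder
        workflowTemplateId: "my-template",  // placeholder
    });
    for (const job of tmpl.jobs) {
        const levels = job.sparkJob?.loggingConfig?.driverLogLevels;
        if (levels && Object.keys(levels).length > 0) {
            console.log(job.stepId, levels);
        }
    }
}

showDriverLogLevels().catch(err => console.error(err));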
ManagedClusterResponse
- Cluster
Name This property is required. string - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- Config
This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Cluster Config Response - The cluster configuration.
- Labels
This property is required. Dictionary<string, string> - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
- Cluster
Name This property is required. string - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- Config
This property is required. ClusterConfig Response - The cluster configuration.
- Labels
This property is required. map[string]string - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
- cluster
Name This property is required. String - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- config
This property is required. ClusterConfig Response - The cluster configuration.
- labels
This property is required. Map<String,String> - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
- cluster
Name This property is required. string - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- config
This property is required. ClusterConfig Response - The cluster configuration.
- labels
This property is required. {[key: string]: string} - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
- cluster_
name This property is required. str - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- config
This property is required. ClusterConfig Response - The cluster configuration.
- labels
This property is required. Mapping[str, str] - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
- cluster
Name This property is required. String - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- config
This property is required. Property Map - The cluster configuration.
- labels
This property is required. Map<String> - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
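To illustrate the ManagedCluster fields above, a minimal sketch with placeholder identifiers that echoes the cluster name prefix and the number of labels the template attaches:
import * as google_native from "@pulumi/google-native";

// Minimal sketch with placeholder identifiers: echo the managed cluster's name
// prefix and how many labels the template attaches to it.
export const managedClusterInfo = google_native.dataproc.v1beta2
    .getWorkflowTemplateOutput({
        project: "my-project",              // placeholder
        location: "us-central1",            // placeholder
        workflowTemplateId: "my-template",  // placeholder
    })
    .placement.apply(p => ({
        clusterName: p.managedCluster?.clusterName,
        labelCount: Object.keys(p.managedCluster?.labels ?? {}).length,
    }));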
ManagedGroupConfigResponse
- Instance
Group Manager Name This property is required. string - The name of the Instance Group Manager for this group.
- Instance
Template Name This property is required. string - The name of the Instance Template used for the Managed Instance Group.
- Instance
Group Manager Name This property is required. string - The name of the Instance Group Manager for this group.
- Instance
Template Name This property is required. string - The name of the Instance Template used for the Managed Instance Group.
- instance
Group Manager Name This property is required. String - The name of the Instance Group Manager for this group.
- instance
Template Name This property is required. String - The name of the Instance Template used for the Managed Instance Group.
- instance
Group Manager Name This property is required. string - The name of the Instance Group Manager for this group.
- instance
Template Name This property is required. string - The name of the Instance Template used for the Managed Instance Group.
- instance_
group_ manager_ name This property is required. str - The name of the Instance Group Manager for this group.
- instance_
template_ name This property is required. str - The name of the Instance Template used for the Managed Instance Group.
- instance
Group Manager Name This property is required. String - The name of the Instance Group Manager for this group.
- instance
Template Name This property is required. String - The name of the Instance Template used for the Managed Instance Group.
MetastoreConfigResponse
- Dataproc
Metastore Service This property is required. string - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- Dataproc
Metastore Service This property is required. string - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataproc
Metastore Service This property is required. String - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataproc
Metastore Service This property is required. string - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataproc_
metastore_ service This property is required. str - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataproc
Metastore Service This property is required. String - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
NamespacedGkeDeploymentTargetResponse
- Cluster
Namespace This property is required. string - Optional. A namespace within the GKE cluster to deploy into.
- Target
Gke Cluster This property is required. string - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- Cluster
Namespace This property is required. string - Optional. A namespace within the GKE cluster to deploy into.
- Target
Gke Cluster This property is required. string - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- cluster
Namespace This property is required. String - Optional. A namespace within the GKE cluster to deploy into.
- target
Gke Cluster This property is required. String - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- cluster
Namespace This property is required. string - Optional. A namespace within the GKE cluster to deploy into.
- target
Gke Cluster This property is required. string - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- cluster_
namespace This property is required. str - Optional. A namespace within the GKE cluster to deploy into.
- target_
gke_ cluster This property is required. str - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- cluster
Namespace This property is required. String - Optional. A namespace within the GKE cluster to deploy into.
- target
Gke Cluster This property is required. String - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
NodeGroupAffinityResponse
- Node
Group Uri This property is required. string - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- Node
Group Uri This property is required. string - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- node
Group Uri This property is required. String - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- node
Group Uri This property is required. string - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- node_
group_ uri This property is required. str - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- node
Group Uri This property is required. String - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
NodeInitializationActionResponse
- Executable
File This property is required. string - Cloud Storage URI of executable file.
- Execution
Timeout This property is required. string - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- Executable
File This property is required. string - Cloud Storage URI of executable file.
- Execution
Timeout This property is required. string - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- executable
File This property is required. String - Cloud Storage URI of executable file.
- execution
Timeout This property is required. String - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- executable
File This property is required. string - Cloud Storage URI of executable file.
- execution
Timeout This property is required. string - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- executable_
file This property is required. str - Cloud Storage URI of executable file.
- execution_
timeout This property is required. str - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- executable
File This property is required. String - Cloud Storage URI of executable file.
- execution
Timeout This property is required. String - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
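The initialization actions above can be read from the managed cluster config; a minimal sketch with placeholder identifiers:
import * as google_native from "@pulumi/google-native";

// Minimal sketch with placeholder identifiers: list each initialization action's
// executable and timeout for the template's managed cluster. The 10 minute ("600s")
// default applies when executionTimeout is empty.
export const initActions = google_native.dataproc.v1beta2
    .getWorkflowTemplateOutput({
        project: "my-project",              // placeholder
        location: "us-central1",            // placeholder
        workflowTemplateId: "my-template",  // placeholder
    })
    .placement.apply(p =>
        (p.managedCluster?.config?.initializationActions ?? []).map(a =>
            `${a.executableFile} (timeout: ${a.executionTimeout || "600s"})`));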
OrderedJobResponse
- Hadoop
Job This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Hadoop Job Response - Optional. Job is a Hadoop job.
- Hive
Job This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Hive Job Response - Optional. Job is a Hive job.
- Labels
This property is required. Dictionary<string, string> - Optional. The labels to associate with this job.Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given job.
- Pig
Job This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Pig Job Response - Optional. Job is a Pig job.
- Prerequisite
Step Ids This property is required. List<string> - Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- Presto
Job This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Presto Job Response - Optional. Job is a Presto job.
- Pyspark
Job This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Py Spark Job Response - Optional. Job is a PySpark job.
- Scheduling
This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Job Scheduling Response - Optional. Job scheduling configuration.
- Spark
Job This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Spark Job Response - Optional. Job is a Spark job.
- Spark
RJob This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Spark RJob Response - Optional. Job is a SparkR job.
- Spark
Sql Job This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Spark Sql Job Response - Optional. Job is a SparkSql job.
- Step
Id This property is required. string - The step id. The id must be unique among all jobs within the template.The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- Hadoop
Job This property is required. HadoopJob Response - Optional. Job is a Hadoop job.
- Hive
Job This property is required. HiveJob Response - Optional. Job is a Hive job.
- Labels
This property is required. map[string]string - Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
- Pig
Job This property is required. PigJob Response - Optional. Job is a Pig job.
- Prerequisite
Step Ids This property is required. []string - Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- Presto
Job This property is required. PrestoJob Response - Optional. Job is a Presto job.
- Pyspark
Job This property is required. PySpark Job Response - Optional. Job is a PySpark job.
- Scheduling
This property is required. JobScheduling Response - Optional. Job scheduling configuration.
- Spark
Job This property is required. SparkJob Response - Optional. Job is a Spark job.
- Spark
RJob This property is required. SparkRJob Response - Optional. Job is a SparkR job.
- Spark
Sql Job This property is required. SparkSql Job Response - Optional. Job is a SparkSql job.
- Step
Id This property is required. string - The step id. The id must be unique among all jobs within the template. The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- hadoop
Job This property is required. HadoopJob Response - Optional. Job is a Hadoop job.
- hive
Job This property is required. HiveJob Response - Optional. Job is a Hive job.
- labels
This property is required. Map<String,String> - Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
- pig
Job This property is required. PigJob Response - Optional. Job is a Pig job.
- prerequisite
Step Ids This property is required. List<String> - Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- presto
Job This property is required. PrestoJob Response - Optional. Job is a Presto job.
- pyspark
Job This property is required. PySpark Job Response - Optional. Job is a PySpark job.
- scheduling
This property is required. JobScheduling Response - Optional. Job scheduling configuration.
- spark
Job This property is required. SparkJob Response - Optional. Job is a Spark job.
- spark
RJob This property is required. SparkRJob Response - Optional. Job is a SparkR job.
- spark
Sql Job This property is required. SparkSql Job Response - Optional. Job is a SparkSql job.
- step
Id This property is required. String - The step id. The id must be unique among all jobs within the template. The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- hadoop
Job This property is required. HadoopJob Response - Optional. Job is a Hadoop job.
- hive
Job This property is required. HiveJob Response - Optional. Job is a Hive job.
- labels
This property is required. {[key: string]: string} - Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
- pig
Job This property is required. PigJob Response - Optional. Job is a Pig job.
- prerequisite
Step Ids This property is required. string[] - Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- presto
Job This property is required. PrestoJob Response - Optional. Job is a Presto job.
- pyspark
Job This property is required. PySpark Job Response - Optional. Job is a PySpark job.
- scheduling
This property is required. JobScheduling Response - Optional. Job scheduling configuration.
- spark
Job This property is required. SparkJob Response - Optional. Job is a Spark job.
- spark
RJob This property is required. SparkRJob Response - Optional. Job is a SparkR job.
- spark
Sql Job This property is required. SparkSql Job Response - Optional. Job is a SparkSql job.
- step
Id This property is required. string - The step id. The id must be unique among all jobs within the template. The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- hadoop_
job This property is required. HadoopJob Response - Optional. Job is a Hadoop job.
- hive_
job This property is required. HiveJob Response - Optional. Job is a Hive job.
- labels
This property is required. Mapping[str, str] - Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
- pig_
job This property is required. PigJob Response - Optional. Job is a Pig job.
- prerequisite_
step_ ids This property is required. Sequence[str] - Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- presto_
job This property is required. PrestoJob Response - Optional. Job is a Presto job.
- pyspark_
job This property is required. PySpark Job Response - Optional. Job is a PySpark job.
- scheduling
This property is required. JobScheduling Response - Optional. Job scheduling configuration.
- spark_
job This property is required. SparkJob Response - Optional. Job is a Spark job.
- spark_
r_ job This property is required. SparkRJob Response - Optional. Job is a SparkR job.
- spark_
sql_ job This property is required. SparkSql Job Response - Optional. Job is a SparkSql job.
- step_
id This property is required. str - The step id. The id must be unique among all jobs within the template. The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- hadoop
Job This property is required. Property Map - Optional. Job is a Hadoop job.
- hive
Job This property is required. Property Map - Optional. Job is a Hive job.
- labels
This property is required. Map<String> - Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
- pig
Job This property is required. Property Map - Optional. Job is a Pig job.
- prerequisite
Step Ids This property is required. List<String> - Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- presto
Job This property is required. Property Map - Optional. Job is a Presto job.
- pyspark
Job This property is required. Property Map - Optional. Job is a PySpark job.
- scheduling
This property is required. Property Map - Optional. Job scheduling configuration.
- spark
Job This property is required. Property Map - Optional. Job is a Spark job.
- spark
RJob This property is required. Property Map - Optional. Job is a SparkR job.
- spark
Sql Job This property is required. Property Map - Optional. Job is a SparkSql job.
- step
Id This property is required. String - The step id. The id must be unique among all jobs within the template. The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
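Each entry in a template's jobs list is an OrderedJobResponse with exactly one of the job fields above populated. The following is a minimal Python sketch of walking those jobs after fetching a template; it assumes the pulumi_google_native package exposes this function as pulumi_google_native.dataproc.v1beta2.get_workflow_template, that the lookup result carries the template's ordered jobs in a jobs property, and it uses the snake_case field names from the Python column. The project, region, and template id values are placeholders.

import pulumi_google_native.dataproc.v1beta2 as dataproc

# Placeholder identifiers; substitute your own project, region, and template id.
tpl = dataproc.get_workflow_template(
    project="my-project",
    location="us-central1",
    workflow_template_id="my-template",
)

# Assumed: the result exposes the template's ordered jobs as `jobs`.
for job in (tpl.jobs or []):
    # Exactly one of the job type fields is expected to be set per step.
    job_fields = ("hadoop_job", "hive_job", "pig_job", "presto_job",
                  "pyspark_job", "spark_job", "spark_r_job", "spark_sql_job")
    kind = next((f for f in job_fields if getattr(job, f, None)), "unknown")
    print(job.step_id, kind, "depends on:", list(job.prerequisite_step_ids or []))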
ParameterValidationResponse
- Regex
This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Regex Validation Response - Validation based on regular expressions.
- Values
This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Value Validation Response - Validation based on a list of allowed values.
- Regex
This property is required. RegexValidation Response - Validation based on regular expressions.
- Values
This property is required. ValueValidation Response - Validation based on a list of allowed values.
- regex
This property is required. RegexValidation Response - Validation based on regular expressions.
- values
This property is required. ValueValidation Response - Validation based on a list of allowed values.
- regex
This property is required. RegexValidation Response - Validation based on regular expressions.
- values
This property is required. ValueValidation Response - Validation based on a list of allowed values.
- regex
This property is required. RegexValidation Response - Validation based on regular expressions.
- values
This property is required. ValueValidation Response - Validation based on a list of allowed values.
- regex
This property is required. Property Map - Validation based on regular expressions.
- values
This property is required. Property Map - Validation based on a list of allowed values.
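A parameter's validation holds at most one of the two rule types above. Below is a short, hedged sketch of inspecting them; it assumes the lookup result from the earlier sketch exposes the template's parameters as a parameters list whose entries carry name and validation fields.

def describe_validation(tpl):
    # `tpl` is a GetWorkflowTemplateResult; `parameters` and `name` are assumed field names.
    for param in (tpl.parameters or []):
        validation = param.validation
        if validation and validation.regex:
            print(param.name, "must fully match one of:", list(validation.regex.regexes))
        elif validation and validation.values:
            print(param.name, "must be one of:", list(validation.values.values))
        else:
            print(param.name, "has no validation rule")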
PigJobResponse
- Continue
On Failure This property is required. bool - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- Jar
File Uris This property is required. List<string> - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- Logging
Config This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Logging Config Response - Optional. The runtime log config for job execution.
- Properties
This property is required. Dictionary<string, string> - Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- Query
File Uri This property is required. string - The HCFS URI of the script that contains the Pig queries.
- Query
List This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Query List Response - A list of queries.
- Script
Variables This property is required. Dictionary<string, string> - Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- Continue
On Failure This property is required. bool - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- Jar
File Uris This property is required. []string - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- Logging
Config This property is required. LoggingConfig Response - Optional. The runtime log config for job execution.
- Properties
This property is required. map[string]string - Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- Query
File Uri This property is required. string - The HCFS URI of the script that contains the Pig queries.
- Query
List This property is required. QueryList Response - A list of queries.
- Script
Variables This property is required. map[string]string - Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continue
On Failure This property is required. Boolean - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar
File Uris This property is required. List<String> - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- logging
Config This property is required. LoggingConfig Response - Optional. The runtime log config for job execution.
- properties
This property is required. Map<String,String> - Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- query
File Uri This property is required. String - The HCFS URI of the script that contains the Pig queries.
- query
List This property is required. QueryList Response - A list of queries.
- script
Variables This property is required. Map<String,String> - Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continue
On Failure This property is required. boolean - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar
File Uris This property is required. string[] - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- logging
Config This property is required. LoggingConfig Response - Optional. The runtime log config for job execution.
- properties
This property is required. {[key: string]: string} - Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- query
File Uri This property is required. string - The HCFS URI of the script that contains the Pig queries.
- query
List This property is required. QueryList Response - A list of queries.
- script
Variables This property is required. {[key: string]: string} - Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continue_
on_ failure This property is required. bool - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar_
file_ uris This property is required. Sequence[str] - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- logging_
config This property is required. LoggingConfig Response - Optional. The runtime log config for job execution.
- properties
This property is required. Mapping[str, str] - Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- query_
file_ uri This property is required. str - The HCFS URI of the script that contains the Pig queries.
- query_
list This property is required. QueryList Response - A list of queries.
- script_
variables This property is required. Mapping[str, str] - Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continue
On Failure This property is required. Boolean - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar
File Uris This property is required. List<String> - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- logging
Config This property is required. Property Map - Optional. The runtime log config for job execution.
- properties
This property is required. Map<String> - Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- query
File Uri This property is required. String - The HCFS URI of the script that contains the Pig queries.
- query
List This property is required. Property Map - A list of queries.
- script
Variables This property is required. Map<String> - Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
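A Pig step carries its queries either as a script URI (query_file_uri) or inline (query_list), plus optional script variables. A small sketch of rendering one, using the Python field names from the column above:

def describe_pig_job(pig_job):
    # One of query_file_uri / query_list is expected to be populated, not both.
    if pig_job.query_file_uri:
        print("script:", pig_job.query_file_uri)
    elif pig_job.query_list:
        for query in pig_job.query_list.queries:
            print("query:", query)
    # Script variables correspond to Pig's `name=[value]` parameters.
    for name, value in (pig_job.script_variables or {}).items():
        print(f"-param {name}={value}")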
PrestoJobResponse
- Client
Tags This property is required. List<string> - Optional. Presto client tags to attach to this query.
- Continue
On Failure This property is required. bool - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- Logging
Config This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Logging Config Response - Optional. The runtime log config for job execution.
- Output
Format This property is required. string - Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- Properties
This property is required. Dictionary<string, string> - Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- Query
File Uri This property is required. string - The HCFS URI of the script that contains SQL queries.
- Query
List This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Query List Response - A list of queries.
- Client
Tags This property is required. []string - Optional. Presto client tags to attach to this query.
- Continue
On Failure This property is required. bool - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- Logging
Config This property is required. LoggingConfig Response - Optional. The runtime log config for job execution.
- Output
Format This property is required. string - Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- Properties
This property is required. map[string]string - Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- Query
File Uri This property is required. string - The HCFS URI of the script that contains SQL queries.
- Query
List This property is required. QueryList Response - A list of queries.
- client
Tags This property is required. List<String> - Optional. Presto client tags to attach to this query.
- continue
On Failure This property is required. Boolean - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- logging
Config This property is required. LoggingConfig Response - Optional. The runtime log config for job execution.
- output
Format This property is required. String - Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties
This property is required. Map<String,String> - Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- query
File Uri This property is required. String - The HCFS URI of the script that contains SQL queries.
- query
List This property is required. QueryList Response - A list of queries.
- client
Tags This property is required. string[] - Optional. Presto client tags to attach to this query.
- continue
On Failure This property is required. boolean - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- logging
Config This property is required. LoggingConfig Response - Optional. The runtime log config for job execution.
- output
Format This property is required. string - Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties
This property is required. {[key: string]: string} - Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- query
File Uri This property is required. string - The HCFS URI of the script that contains SQL queries.
- query
List This property is required. QueryList Response - A list of queries.
- client_
tags This property is required. Sequence[str] - Optional. Presto client tags to attach to this query.
- continue_
on_ failure This property is required. bool - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- logging_
config This property is required. LoggingConfig Response - Optional. The runtime log config for job execution.
- output_
format This property is required. str - Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties
This property is required. Mapping[str, str] - Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- query_
file_ uri This property is required. str - The HCFS URI of the script that contains SQL queries.
- query_
list This property is required. QueryList Response - A list of queries.
- client
Tags This property is required. List<String> - Optional. Presto client tags to attach to this query.
- continue
On Failure This property is required. Boolean - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- logging
Config This property is required. Property Map - Optional. The runtime log config for job execution.
- output
Format This property is required. String - Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties
This property is required. Map<String> - Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- query
File Uri This property is required. String - The HCFS URI of the script that contains SQL queries.
- query
List This property is required. Property Map - A list of queries.
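The client tags, output format, and failure behavior above are all optional. A small sketch of reading them from a PrestoJobResponse; field names follow the Python column, and client_tags is the tag list whose name is restored above.

def describe_presto_job(presto_job):
    # Optional fields fall back to service defaults when unset.
    print("client tags:", list(presto_job.client_tags or []))
    print("output format:", presto_job.output_format or "(server default)")
    print("continue on failure:", bool(presto_job.continue_on_failure))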
PySparkJobResponse
- Archive
Uris This property is required. List<string> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args
This property is required. List<string> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- File
Uris This property is required. List<string> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- Jar
File Uris This property is required. List<string> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- Logging
Config This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Logging Config Response - Optional. The runtime log config for job execution.
- Main
Python File Uri This property is required. string - The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- Properties
This property is required. Dictionary<string, string> - Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- Python
File Uris This property is required. List<string> - Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- Archive
Uris This property is required. []string - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args
This property is required. []string - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- File
Uris This property is required. []string - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- Jar
File Uris This property is required. []string - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- Logging
Config This property is required. LoggingConfig Response - Optional. The runtime log config for job execution.
- Main
Python File Uri This property is required. string - The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- Properties
This property is required. map[string]string - Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- Python
File Uris This property is required. []string - Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archive
Uris This property is required. List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args
This property is required. List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file
Uris This property is required. List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jar
File Uris This property is required. List<String> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- logging
Config This property is required. LoggingConfig Response - Optional. The runtime log config for job execution.
- main
Python File Uri This property is required. String - The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- properties
This property is required. Map<String,String> - Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- python
File Uris This property is required. List<String> - Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archive
Uris This property is required. string[] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args
This property is required. string[] - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file
Uris This property is required. string[] - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jar
File Uris This property is required. string[] - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- logging
Config This property is required. LoggingConfig Response - Optional. The runtime log config for job execution.
- main
Python File Uri This property is required. string - The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- properties
This property is required. {[key: string]: string} - Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- python
File Uris This property is required. string[] - Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archive_
uris This property is required. Sequence[str] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args
This property is required. Sequence[str] - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_
uris This property is required. Sequence[str] - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jar_
file_ uris This property is required. Sequence[str] - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- logging_
config This property is required. LoggingConfig Response - Optional. The runtime log config for job execution.
- main_
python_ file_ uri This property is required. str - The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- properties
This property is required. Mapping[str, str] - Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- python_
file_ uris This property is required. Sequence[str] - Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archive
Uris This property is required. List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args
This property is required. List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file
Uris This property is required. List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jar
File Uris This property is required. List<String> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- logging
Config This property is required. Property Map - Optional. The runtime log config for job execution.
- main
Python File Uri This property is required. String - The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- properties
This property is required. Map<String> - Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- python
File Uris This property is required. List<String> - Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
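The PySpark fields above map closely onto a spark-submit invocation. The sketch below renders a PySparkJobResponse in that style for illustration only; the actual submission is handled by Dataproc.

def spark_submit_preview(pyspark_job):
    # Illustrative rendering of a PySparkJobResponse; not how Dataproc submits the job.
    parts = ["spark-submit"]
    if pyspark_job.python_file_uris:
        parts += ["--py-files", ",".join(pyspark_job.python_file_uris)]
    parts.append(pyspark_job.main_python_file_uri)
    parts += list(pyspark_job.args or [])
    return " ".join(parts)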
QueryListResponse
- Queries
This property is required. List<string> - The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- Queries
This property is required. []string - The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries
This property is required. List<String> - The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries
This property is required. string[] - The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries
This property is required. Sequence[str] - The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries
This property is required. List<String> - The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
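As the API example above shows, a single entry in queries may itself contain several statements separated by semicolons. A small helper that flattens a QueryListResponse into individual statements:

def flatten_queries(query_list):
    # "query3;query4" expands to two statements; a trailing semicolon is not required.
    statements = []
    for entry in query_list.queries:
        statements.extend(part.strip() for part in entry.split(";") if part.strip())
    return statements

# flatten_queries over ["query1", "query2", "query3;query4"] yields
# ["query1", "query2", "query3", "query4"].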
RegexValidationResponse
- Regexes
This property is required. List<string> - RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
- Regexes
This property is required. []string - RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
- regexes
This property is required. List<String> - RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
- regexes
This property is required. string[] - RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
- regexes
This property is required. Sequence[str] - RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
- regexes
This property is required. List<String> - RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
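Because a parameter value must match a regex in its entirety, a client-side pre-check maps naturally onto Python's re.fullmatch. Note the service evaluates RE2, whose syntax mostly, but not entirely, overlaps with Python's re module.

import re

def value_passes(regex_validation, value):
    # Full-entirety match only; substring matches are not sufficient.
    return any(re.fullmatch(pattern, value) for pattern in regex_validation.regexes)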
ReservationAffinityResponse
- Consume
Reservation Type This property is required. string - Optional. Type of reservation to consume
- Key
This property is required. string - Optional. Corresponds to the label key of reservation resource.
- Values
This property is required. List<string> - Optional. Corresponds to the label values of reservation resource.
- Consume
Reservation Type This property is required. string - Optional. Type of reservation to consume
- Key
This property is required. string - Optional. Corresponds to the label key of reservation resource.
- Values
This property is required. []string - Optional. Corresponds to the label values of reservation resource.
- consume
Reservation Type This property is required. String - Optional. Type of reservation to consume
- key
This property is required. String - Optional. Corresponds to the label key of reservation resource.
- values
This property is required. List<String> - Optional. Corresponds to the label values of reservation resource.
- consume
Reservation Type This property is required. string - Optional. Type of reservation to consume
- key
This property is required. string - Optional. Corresponds to the label key of reservation resource.
- values
This property is required. string[] - Optional. Corresponds to the label values of reservation resource.
- consume_
reservation_ type This property is required. str - Optional. Type of reservation to consume
- key
This property is required. str - Optional. Corresponds to the label key of reservation resource.
- values
This property is required. Sequence[str] - Optional. Corresponds to the label values of reservation resource.
- consume
Reservation Type This property is required. String - Optional. Type of reservation to consume
- key
This property is required. String - Optional. Corresponds to the label key of reservation resource.
- values
This property is required. List<String> - Optional. Corresponds to the label values of reservation resource.
SecurityConfigResponse
- Kerberos
Config This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Kerberos Config Response - Optional. Kerberos related configuration.
- Kerberos
Config This property is required. KerberosConfig Response - Optional. Kerberos related configuration.
- kerberos
Config This property is required. KerberosConfig Response - Optional. Kerberos related configuration.
- kerberos
Config This property is required. KerberosConfig Response - Optional. Kerberos related configuration.
- kerberos_
config This property is required. KerberosConfig Response - Optional. Kerberos related configuration.
- kerberos
Config This property is required. Property Map - Optional. Kerberos related configuration.
ShieldedInstanceConfigResponse
- Enable
Integrity Monitoring This property is required. bool - Optional. Defines whether instances have integrity monitoring enabled.
- Enable
Secure Boot This property is required. bool - Optional. Defines whether instances have Secure Boot enabled.
- Enable
Vtpm This property is required. bool - Optional. Defines whether instances have the vTPM enabled.
- Enable
Integrity Monitoring This property is required. bool - Optional. Defines whether instances have integrity monitoring enabled.
- Enable
Secure Boot This property is required. bool - Optional. Defines whether instances have Secure Boot enabled.
- Enable
Vtpm This property is required. bool - Optional. Defines whether instances have the vTPM enabled.
- enable
Integrity Monitoring This property is required. Boolean - Optional. Defines whether instances have integrity monitoring enabled.
- enable
Secure Boot This property is required. Boolean - Optional. Defines whether instances have Secure Boot enabled.
- enable
Vtpm This property is required. Boolean - Optional. Defines whether instances have the vTPM enabled.
- enable
Integrity Monitoring This property is required. boolean - Optional. Defines whether instances have integrity monitoring enabled.
- enable
Secure Boot This property is required. boolean - Optional. Defines whether instances have Secure Boot enabled.
- enable
Vtpm This property is required. boolean - Optional. Defines whether instances have the vTPM enabled.
- enable_
integrity_ monitoring This property is required. bool - Optional. Defines whether instances have integrity monitoring enabled.
- enable_
secure_ boot This property is required. bool - Optional. Defines whether instances have Secure Boot enabled.
- enable_
vtpm This property is required. bool - Optional. Defines whether instances have the vTPM enabled.
- enable
Integrity Monitoring This property is required. Boolean - Optional. Defines whether instances have integrity monitoring enabled.
- enable
Secure Boot This property is required. Boolean - Optional. Defines whether instances have Secure Boot enabled.
- enable
Vtpm This property is required. Boolean - Optional. Defines whether instances have the vTPM enabled.
SoftwareConfigResponse
- Image
Version This property is required. string - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- Optional
Components This property is required. List<string> - The set of optional components to activate on the cluster.
- Properties
This property is required. Dictionary<string, string> - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- Image
Version This property is required. string - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- Optional
Components This property is required. []string - The set of optional components to activate on the cluster.
- Properties
This property is required. map[string]string - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- image
Version This property is required. String - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optional
Components This property is required. List<String> - The set of optional components to activate on the cluster.
- properties
This property is required. Map<String,String> - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- image
Version This property is required. string - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optional
Components This property is required. string[] - The set of optional components to activate on the cluster.
- properties
This property is required. {[key: string]: string} - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- image_
version This property is required. str - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optional_
components This property is required. Sequence[str] - The set of optional components to activate on the cluster.
- properties
This property is required. Mapping[str, str] - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- image
Version This property is required. String - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optional
Components This property is required. List<String> - The set of optional components to activate on the cluster.
- properties
This property is required. Map<String> - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
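Because property keys combine a config-file prefix and a property name, grouping them back per file is a simple string split. A sketch using the Python field names from the column above:

def group_properties(software_config):
    # Splits "prefix:property" keys (e.g. "core:hadoop.tmp.dir") into per-file groups.
    grouped = {}
    for key, value in (software_config.properties or {}).items():
        prefix, _, prop = key.partition(":")
        grouped.setdefault(prefix, {})[prop] = value
    return grouped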
SparkJobResponse
- Archive
Uris This property is required. List<string> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args
This property is required. List<string> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- File
Uris This property is required. List<string> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- Jar
File Uris This property is required. List<string> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- Logging
Config This property is required. Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Logging Config Response - Optional. The runtime log config for job execution.
- Main
Class This property is required. string - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- Main
Jar File Uri This property is required. string - The HCFS URI of the jar file that contains the main class.
- Properties
This property is required. Dictionary<string, string> - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- Archive
Uris This property is required. []string - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args
This property is required. []string - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- File
Uris This property is required. []string - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- Jar
File Uris This property is required. []string - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- Logging
Config This property is required. LoggingConfig Response - Optional. The runtime log config for job execution.
- Main
Class This property is required. string - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- Main
Jar File Uri This property is required. string - The HCFS URI of the jar file that contains the main class.
- Properties
This property is required. map[string]string - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archive
Uris This property is required. List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args
This property is required. List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file
Uris This property is required. List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jar
File Uris This property is required. List<String> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- loggingConfig This property is required. LoggingConfigResponse - Optional. The runtime log config for job execution.
- mainClass This property is required. String - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri This property is required. String - The HCFS URI of the jar file that contains the main class.
- properties This property is required. Map<String,String> - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris This property is required. string[] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args This property is required. string[] - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris This property is required. string[] - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris This property is required. string[] - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- loggingConfig This property is required. LoggingConfigResponse - Optional. The runtime log config for job execution.
- mainClass This property is required. string - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri This property is required. string - The HCFS URI of the jar file that contains the main class.
- properties This property is required. {[key: string]: string} - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archive_uris This property is required. Sequence[str] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args This property is required. Sequence[str] - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_uris This property is required. Sequence[str] - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jar_file_uris This property is required. Sequence[str] - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- logging_config This property is required. LoggingConfigResponse - Optional. The runtime log config for job execution.
- main_class This property is required. str - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- main_jar_file_uri This property is required. str - The HCFS URI of the jar file that contains the main class.
- properties This property is required. Mapping[str, str] - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris This property is required. List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args This property is required. List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris This property is required. List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris This property is required. List<String> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- loggingConfig This property is required. Property Map - Optional. The runtime log config for job execution.
- mainClass This property is required. String - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri This property is required. String - The HCFS URI of the jar file that contains the main class.
- properties This property is required. Map<String> - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
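For orientation, the following is a minimal TypeScript sketch of reading these SparkJob fields from a fetched template. The project, location, and template ID are placeholder values, and a step only carries meaningful sparkJob values when Spark is that step's job type.

import * as google_native from "@pulumi/google-native";

// Placeholder identifiers; substitute your own project, region, and template ID.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// Collect the Spark driver entry point declared by each Spark step.
// A SparkJob sets either mainClass or mainJarFileUri, not both.
export const sparkEntryPoints = tpl.jobs.apply(jobs =>
    jobs
        .filter(j => j.sparkJob && (j.sparkJob.mainClass || j.sparkJob.mainJarFileUri))
        .map(j => `${j.stepId}: ${j.sparkJob.mainClass || j.sparkJob.mainJarFileUri}`));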
SparkRJobResponse
- ArchiveUris This property is required. List<string> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args This property is required. List<string> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris This property is required. List<string> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- LoggingConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse - Optional. The runtime log config for job execution.
- MainRFileUri This property is required. string - The HCFS URI of the main R file to use as the driver. Must be a .R file.
- Properties This property is required. Dictionary<string, string> - Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- ArchiveUris This property is required. []string - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args This property is required. []string - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris This property is required. []string - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- LoggingConfig This property is required. LoggingConfigResponse - Optional. The runtime log config for job execution.
- MainRFileUri This property is required. string - The HCFS URI of the main R file to use as the driver. Must be a .R file.
- Properties This property is required. map[string]string - Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris This property is required. List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args This property is required. List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris This property is required. List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- loggingConfig This property is required. LoggingConfigResponse - Optional. The runtime log config for job execution.
- mainRFileUri This property is required. String - The HCFS URI of the main R file to use as the driver. Must be a .R file.
- properties This property is required. Map<String,String> - Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris This property is required. string[] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args This property is required. string[] - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris This property is required. string[] - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- loggingConfig This property is required. LoggingConfigResponse - Optional. The runtime log config for job execution.
- mainRFileUri This property is required. string - The HCFS URI of the main R file to use as the driver. Must be a .R file.
- properties This property is required. {[key: string]: string} - Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archive_uris This property is required. Sequence[str] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args This property is required. Sequence[str] - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_uris This property is required. Sequence[str] - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- logging_config This property is required. LoggingConfigResponse - Optional. The runtime log config for job execution.
- main_r_file_uri This property is required. str - The HCFS URI of the main R file to use as the driver. Must be a .R file.
- properties This property is required. Mapping[str, str] - Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris This property is required. List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args This property is required. List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris This property is required. List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- loggingConfig This property is required. Property Map - Optional. The runtime log config for job execution.
- mainRFileUri This property is required. String - The HCFS URI of the main R file to use as the driver. Must be a .R file.
- properties This property is required. Map<String> - Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
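The SparkRJob fields can be read the same way. The sketch below, with the same placeholder project, location, and template ID, lists the main R file declared by each SparkR step.

import * as google_native from "@pulumi/google-native";

// Placeholder identifiers; substitute your own project, region, and template ID.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// List the driver R file (an HCFS URI ending in .R) for every SparkR step.
export const sparkRDrivers = tpl.jobs.apply(jobs =>
    jobs
        .filter(j => j.sparkRJob && j.sparkRJob.mainRFileUri)
        .map(j => `${j.stepId}: ${j.sparkRJob.mainRFileUri}`));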
SparkSqlJobResponse
- JarFileUris This property is required. List<string> - Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- LoggingConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse - Optional. The runtime log config for job execution.
- Properties This property is required. Dictionary<string, string> - Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- QueryFileUri This property is required. string - The HCFS URI of the script that contains SQL queries.
- QueryList This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryListResponse - A list of queries.
- ScriptVariables This property is required. Dictionary<string, string> - Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- JarFileUris This property is required. []string - Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- LoggingConfig This property is required. LoggingConfigResponse - Optional. The runtime log config for job execution.
- Properties This property is required. map[string]string - Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- QueryFileUri This property is required. string - The HCFS URI of the script that contains SQL queries.
- QueryList This property is required. QueryListResponse - A list of queries.
- ScriptVariables This property is required. map[string]string - Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris This property is required. List<String> - Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- loggingConfig This property is required. LoggingConfigResponse - Optional. The runtime log config for job execution.
- properties This property is required. Map<String,String> - Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- queryFileUri This property is required. String - The HCFS URI of the script that contains SQL queries.
- queryList This property is required. QueryListResponse - A list of queries.
- scriptVariables This property is required. Map<String,String> - Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris This property is required. string[] - Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- loggingConfig This property is required. LoggingConfigResponse - Optional. The runtime log config for job execution.
- properties This property is required. {[key: string]: string} - Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- queryFileUri This property is required. string - The HCFS URI of the script that contains SQL queries.
- queryList This property is required. QueryListResponse - A list of queries.
- scriptVariables This property is required. {[key: string]: string} - Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jar_file_uris This property is required. Sequence[str] - Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- logging_config This property is required. LoggingConfigResponse - Optional. The runtime log config for job execution.
- properties This property is required. Mapping[str, str] - Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- query_file_uri This property is required. str - The HCFS URI of the script that contains SQL queries.
- query_list This property is required. QueryListResponse - A list of queries.
- script_variables This property is required. Mapping[str, str] - Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris This property is required. List<String> - Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- loggingConfig This property is required. Property Map - Optional. The runtime log config for job execution.
- properties This property is required. Map<String> - Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- queryFileUri This property is required. String - The HCFS URI of the script that contains SQL queries.
- queryList This property is required. Property Map - A list of queries.
- scriptVariables This property is required. Map<String> - Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
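To illustrate the queryFileUri / queryList alternatives, the sketch below (same placeholder identifiers) reports, per Spark SQL step, either the script URI or the number of inline queries; it assumes queryList exposes its queries as a list of strings.

import * as google_native from "@pulumi/google-native";

// Placeholder identifiers; substitute your own project, region, and template ID.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// A Spark SQL step provides its queries either as a script in HCFS
// (queryFileUri) or inline (queryList.queries), not both.
export const sparkSqlSources = tpl.jobs.apply(jobs =>
    jobs
        .filter(j => j.sparkSqlJob)
        .map(j => j.sparkSqlJob.queryFileUri
            ? `${j.stepId}: script ${j.sparkSqlJob.queryFileUri}`
            : `${j.stepId}: ${(j.sparkSqlJob.queryList?.queries ?? []).length} inline queries`));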
TemplateParameterResponse
- Description This property is required. string - Optional. Brief description of the parameter. Must not exceed 1024 characters.
- Fields This property is required. List<string> - Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] placement.clusterSelector.clusterLabels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
- Name This property is required. string - Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- Validation This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ParameterValidationResponse - Optional. Validation rules to be applied to this parameter's value.
- Description This property is required. string - Optional. Brief description of the parameter. Must not exceed 1024 characters.
- Fields This property is required. []string - Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] placement.clusterSelector.clusterLabels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
- Name This property is required. string - Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- Validation This property is required. ParameterValidationResponse - Optional. Validation rules to be applied to this parameter's value.
- description This property is required. String - Optional. Brief description of the parameter. Must not exceed 1024 characters.
- fields This property is required. List<String> - Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] placement.clusterSelector.clusterLabels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
- name This property is required. String - Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- validation This property is required. ParameterValidationResponse - Optional. Validation rules to be applied to this parameter's value.
- description This property is required. string - Optional. Brief description of the parameter. Must not exceed 1024 characters.
- fields This property is required. string[] - Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] placement.clusterSelector.clusterLabels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
- name This property is required. string - Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- validation This property is required. ParameterValidationResponse - Optional. Validation rules to be applied to this parameter's value.
- description This property is required. str - Optional. Brief description of the parameter. Must not exceed 1024 characters.
- fields This property is required. Sequence[str] - Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] placement.clusterSelector.clusterLabels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
- name This property is required. str - Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- validation This property is required. ParameterValidationResponse - Optional. Validation rules to be applied to this parameter's value.
- description This property is required. String - Optional. Brief description of the parameter. Must not exceed 1024 characters.
- fields This property is required. List<String> - Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] placement.clusterSelector.clusterLabels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
- name This property is required. String - Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- validation This property is required. Property Map - Optional. Validation rules to be applied to this parameter's value.
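To make the parameter/field-path relationship concrete, here is a small sketch (same placeholder identifiers) that summarizes each template parameter and the field paths it replaces; the shape of the summary object is purely illustrative.

import * as google_native from "@pulumi/google-native";

// Placeholder identifiers; substitute your own project, region, and template ID.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// For each parameter, report its name, description, and the field paths
// (for example, placement.clusterSelector.zone) that its value is substituted into.
export const parameterSummary = tpl.parameters.apply(params =>
    params.map(p => ({
        name: p.name,
        description: p.description,
        replaces: p.fields,
    })));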
ValueValidationResponse
- Values This property is required. List<string> - List of allowed values for the parameter.
- Values This property is required. []string - List of allowed values for the parameter.
- values This property is required. List<String> - List of allowed values for the parameter.
- values This property is required. string[] - List of allowed values for the parameter.
- values This property is required. Sequence[str] - List of allowed values for the parameter.
- values This property is required. List<String> - List of allowed values for the parameter.
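A value validation can be checked locally before a template is instantiated. The sketch below does that for a hypothetical parameter named ZONE; the parameter name, the candidate value, and the assumption that its validation carries a value list are all illustrative, and the service performs the authoritative validation.

import * as google_native from "@pulumi/google-native";

// Placeholder identifiers; substitute your own project, region, and template ID.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// Client-side convenience check only; an empty list is treated here as "no restriction".
function isAllowed(candidate: string, allowed: string[]): boolean {
    return allowed.length === 0 || allowed.includes(candidate);
}

export const zoneValueIsAllowed = tpl.parameters.apply(params => {
    const zoneParam = params.find(p => p.name === "ZONE");          // hypothetical parameter name
    const allowed = zoneParam?.validation?.values?.values ?? [];    // assumes a value validation is set
    return isAllowed("us-central1-a", allowed);                     // hypothetical candidate value
});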
WorkflowTemplatePlacementResponse
- ClusterSelector This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ClusterSelectorResponse - Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- ManagedCluster This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ManagedClusterResponse - Optional. A cluster that is managed by the workflow.
- ClusterSelector This property is required. ClusterSelectorResponse - Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- ManagedCluster This property is required. ManagedClusterResponse - Optional. A cluster that is managed by the workflow.
- clusterSelector This property is required. ClusterSelectorResponse - Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- managedCluster This property is required. ManagedClusterResponse - Optional. A cluster that is managed by the workflow.
- clusterSelector This property is required. ClusterSelectorResponse - Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- managedCluster This property is required. ManagedClusterResponse - Optional. A cluster that is managed by the workflow.
- cluster_selector This property is required. ClusterSelectorResponse - Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- managed_cluster This property is required. ManagedClusterResponse - Optional. A cluster that is managed by the workflow.
- clusterSelector This property is required. Property Map - Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- managedCluster This property is required. Property Map - Optional. A cluster that is managed by the workflow.
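A template's placement uses one of the two fields above. The sketch below (same placeholder identifiers) reports which one, assuming managedCluster carries a clusterName and clusterSelector carries clusterLabels.

import * as google_native from "@pulumi/google-native";

// Placeholder identifiers; substitute your own project, region, and template ID.
const tpl = google_native.dataproc.v1beta2.getWorkflowTemplateOutput({
    project: "my-project",
    location: "us-central1",
    workflowTemplateId: "my-template",
});

// Report whether jobs run on a workflow-managed cluster or on an existing
// cluster chosen by label selector at submission time.
export const placementKind = tpl.placement.apply(p =>
    p.managedCluster && p.managedCluster.clusterName
        ? `managed cluster: ${p.managedCluster.clusterName}`
        : `cluster selector: ${JSON.stringify(p.clusterSelector?.clusterLabels ?? {})}`);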
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0