SET PROPERTY

Description

Syntax:

SET PROPERTY [FOR 'user'] 'key' = 'value' [, 'key' = 'value']

Set user attributes, including the resources allocated to the user, the import cluster, and so on. The attributes set here apply to the user, not to a user_identity. That is, if two user identities 'jack'@'%' and 'jack'@'192%' are created through the CREATE USER statement, the SET PROPERTY statement can only be applied to the user jack, not to 'jack'@'%' or 'jack'@'192%'.
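For example, a minimal sketch (the password and host patterns are illustrative):

  CREATE USER 'jack'@'%' IDENTIFIED BY 'password';
  CREATE USER 'jack'@'192%' IDENTIFIED BY 'password';
  -- The property is attached to the user jack, so it affects both identities above.
  SET PROPERTY FOR 'jack' 'max_user_connections' = '1000';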

The import cluster is only applicable to Baidu internal users.

key:

Super user privileges:

  max_user_connections: The maximum number of connections.
  max_query_instances: The maximum number of query instances the user can use when querying.
  sql_block_rules: Set SQL block rules. Once set, queries issued by this user that match the rules will be rejected.
  cpu_resource_limit: Limit the CPU resources used by a query. See the description of the session variable cpu_resource_limit for details.
  exec_mem_limit: Limit the memory usage of a query. See the description of the session variable exec_mem_limit for details. -1 means not set.
  load_mem_limit: Limit the memory usage of an import. See the description of the session variable load_mem_limit for details. -1 means not set.
  resource.cpu_share: CPU resource allocation. (Deprecated)
  load_cluster.{cluster_name}.priority: Assign a priority to the specified cluster, either HIGH or NORMAL.
  resource_tags: Specify the user's resource tag permissions.

Notice: If cpu_resource_limit, exec_mem_limit, or load_mem_limit is not set for the user, the value of the corresponding session variable is used by default.
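A sketch of this fallback behavior (the values below are illustrative):

  -- Session-level default, used when the property is not set for the user.
  SHOW VARIABLES LIKE 'exec_mem_limit';
  -- User-level override: 2 GB per query for jack.
  SET PROPERTY FOR 'jack' 'exec_mem_limit' = '2147483648';
  -- Setting the property to -1 means "not set", so the session variable applies again.
  SET PROPERTY FOR 'jack' 'exec_mem_limit' = '-1';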

Ordinary user privileges:

  quota.normal: Resource allocation at the normal level.
  quota.high: Resource allocation at the high level.
  quota.low: Resource allocation at the low level.
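A sketch of adjusting and inspecting these weights (the value and the SHOW PROPERTY pattern are illustrative; output may vary by version):

  -- Raise the weight of the high level for user jack.
  SET PROPERTY FOR 'jack' 'quota.high' = '800';
  -- Inspect the current quota-related properties.
  SHOW PROPERTY FOR 'jack' LIKE '%quota%';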

  load_cluster.{cluster_name}.hadoop_palo_path: The Hadoop directory used by Palo, which stores the ETL programs and the intermediate data generated by ETL for Palo imports. After an import completes, the intermediate data is cleaned up automatically, and the ETL programs are retained for the next use.
  load_cluster.{cluster_name}.hadoop_configs: The Hadoop configuration, in which fs.default.name, mapred.job.tracker, and hadoop.job.ugi must be filled in.
  load_cluster.{cluster_name}.hadoop_port: The HTTP port of the Hadoop HDFS NameNode.
  default_load_cluster: The default import cluster.

example

  1. Modify the maximum number of connections for user jack to 1000 SET PROPERTY FOR 'jack' 'max_user_connections' = '1000';

  2. Modify the cpu_share of user jack to 1000 SET PROPERTY FOR 'jack' 'resource.cpu_share' = '1000';

  3. Modify the weight of the normal quota group for user jack SET PROPERTY FOR 'jack' 'quota.normal' = '400';

  4. Add an import cluster for user jack SET PROPERTY FOR 'jack' 'load_cluster.{cluster_name}.hadoop_palo_path' = '/user/palo/palo_path', 'load_cluster.{cluster_name}.hadoop_configs' = 'fs.default.name=hdfs://dpp.cluster.com:port;mapred.job.tracker=dpp.cluster.com:port;hadoop.job.ugi=user,password;mapred.job.queue.name=job_queue_name_in_hadoop;mapred.job.priority=HIGH;';

  5. Delete the import cluster under user jack SET PROPERTY FOR 'jack' 'load_cluster.{cluster_name}' = '';

  6. Modify user jack's default import cluster SET PROPERTY FOR 'jack' 'default_load_cluster' = '{cluster_name}';

  7. Modify the cluster priority of user jack to HIGH SET PROPERTY FOR 'jack' 'load_cluster.{cluster_name}.priority' = 'HIGH';

  8. Modify the maximum number of query instances for jack to 3000 SET PROPERTY FOR 'jack' 'max_query_instances' = '3000';

  9. Modify the SQL block rules for jack SET PROPERTY FOR 'jack' 'sql_block_rules' = 'rule1, rule2';

  10. Modify the CPU resource usage limit for jack SET PROPERTY FOR 'jack' 'cpu_resource_limit' = '2';

  11. Modify user jack's resource tag permissions SET PROPERTY FOR 'jack' 'resource_tags.location' = 'group_a, group_b';

  12. Modify user jack's query memory usage limit, in bytes SET PROPERTY FOR 'jack' 'exec_mem_limit' = '2147483648';

  13. Modify user jack's import memory usage limit, in bytes SET PROPERTY FOR 'jack' 'load_mem_limit' = '2147483648';

keyword

SET, PROPERTY