MySQL

1. Concatenating fields

SELECT CONCAT('http://www.abc.com?parentCategoryId=', CATEGORY_ID, '&categoryId=', RELATED_CATEGORY_ID)
FROM RELATED_CATEGORY
WHERE CATEGORY_ID IN (SELECT CATEGORY_ID FROM CATALOG_TO_CATEGORY WHERE CATALOG_ID = 1 AND ACTIVE_FLAG = 'Y')
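
For example, a row with CATEGORY_ID = 10 and RELATED_CATEGORY_ID = 20 (hypothetical values) comes back as:

http://www.abc.com?parentCategoryId=10&categoryId=20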

2. ERROR 1148 (42000): The used command is not allowed with this MySQL version

LOAD DATA LOCAL INFILE 'users.txt' INTO TABLE users FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n' (userid, firstname);

This throws an error like "ERROR 1148 (42000): The used command is not allowed with this MySQL version."


For me the MySQL server was an RDS instance and I was trying to load data from an EC2 instance. When I googled around I got suggestions to restart mysqld with local_infile enabled, or to invoke the client as "mysql -u user -p --local-infile menagerie"; restarting the server was not possible since it was a managed RDS service.

To get it working I had to add the following to /etc/mysql/my.cnf on the client machine, under the [client] section.

[client]
local-infile    = 1
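
Note that local_infile must also be enabled on the server side; on RDS this is controlled through the DB parameter group rather than my.cnf. You can check the current server setting with:

SHOW GLOBAL VARIABLES LIKE 'local_infile';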


3. To get the result set without column headers and pipes (the column separators), use the following command:

mysql -B -h mysqlhost -u root -p --disable-column-names -D dbname -e "select distinct userid from users"

4. You can update one table by joining it to another

UPDATE TABLE_A A, TABLE_B B SET A.COL1 = B.COL2 WHERE A.ID = B.F_ID
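
The same thing can be written with explicit JOIN syntax. A minimal sketch, assuming hypothetical tables users and departments, where users carries a dept_id foreign key and a denormalized dept_name column:

UPDATE users u
JOIN departments d ON u.dept_id = d.id
SET u.dept_name = d.name;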

5. You can define a query-specific escape character in MySQL. It is used to escape the pattern-matching characters '%' and '_' inside a LIKE pattern.

SELECT * FROM USERS WHERE TAG LIKE 'ABC=%%' ESCAPE '='

Here the '=' escapes the first '%', so the query returns users whose TAG starts with the literal string 'ABC%'; only the trailing '%' acts as a wildcard.
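
Without an ESCAPE clause the default escape character is the backslash, so an equivalent query is:

SELECT * FROM USERS WHERE TAG LIKE 'ABC\%%'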

6. The '%' wildcard matches zero or more characters, while '_' matches exactly one character.

A nice link explaining this: https://www.guru99.com/wildcards.html
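
A quick illustration against the hypothetical USERS table:

SELECT * FROM USERS WHERE userid LIKE 'bob%';  -- matches 'bob', 'bobby', 'bob123'
SELECT * FROM USERS WHERE userid LIKE 'bob_';  -- matches exactly four characters: 'bob1', 'bobs'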

7. You can have a multi-line string within a MySQL query, something like below, where I am inserting a Spark/Scala script into one column. Also see how the single quotes inside the value are escaped.


INSERT INTO tab1(t1,t2,t3,t4,t5) VALUES
('T1', 'T2', 'import org.apache.spark.SparkContext 
import org.apache.spark.sql.SQLContext 
import org.slf4j.{Logger, LoggerFactory}
import java.util.Calendar

object FirstSQL {
  def run(sqlContext: SQLContext) = {
    println(Calendar.getInstance.getTime)
    val loadKey = "Key"
    
    var overwriteOrInto = "into";
    try {  
      for( i <- 1 to 10 ) {
        val counter=String.valueOf(i)
        
        if ( i == 1 ) overwriteOrInto = "overwrite" 
        else overwriteOrInto = "into" 
        
        val sql = String.format(
            " insert into another_table select \'fixed\',%s, col from tab3",counter)
          
          println(sql)
          sqlContext.setConf("spark.sql.shuffle.partitions", "10");
          sqlContext.sql(sql) 
      }     
      println(Calendar.getInstance.getTime)
      System.exit(0)        
    } catch {
      case e: Exception => e.printStackTrace(); System.exit(1)
    }
  }  
}

FirstSQL.run(sqlContext)', 'T4', 'T5' )
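
Instead of backslash-escaping, a single quote inside a string literal can also be doubled, which is the standard SQL form. A small sketch (assuming the remaining columns of tab1 accept NULL):

INSERT INTO tab1(t1) VALUES ('it''s escaped');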


8. Never run DML (an UPDATE or DELETE, say) without making sure autocommit is disabled, so that a mistake can be rolled back. Note that DDL statements in MySQL cause an implicit commit, so they cannot be rolled back either way.

set autocommit=0
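
With autocommit off, a mistaken DML statement can be rolled back before it becomes permanent. A minimal sketch against the hypothetical users table:

UPDATE users SET firstname = 'x' WHERE userid = 42;  -- oops, wrong row
ROLLBACK;  -- undoes the change; run COMMIT instead once the change is correct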

9. In place of Oracle's DECODE, use IF in MySQL

IF(col1 = 'abc', 'this', 'that')
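
For a DECODE with more than one branch, CASE maps more directly. A sketch, assuming a hypothetical table tab with a column col1:

SELECT CASE col1
         WHEN 'abc' THEN 'this'
         WHEN 'def' THEN 'other'
         ELSE 'that'
       END
FROM tab;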


