Cassandra Spark DataFrame, CSV: "query in the SELECT clause of the INSERT INTO/OVERWRITE statement generates the same number of columns as its schema"


I have a CSV file with around 100 columns. I want to load it into a table with 101 columns (actually it has 102 columns).

The problem is that I get the following message: org.apache.spark.sql.cassandra.CassandraSourceRelation requires that the query in the SELECT clause of the INSERT INTO/OVERWRITE statement generates the same number of columns as its schema.

How can I overcome this problem?

Here is the code:

  df = sqlContext.read()
          .format("csv")
          .option("delimiter", ";")
          .option("header", "true")
          .load("file:///" + namefile);

and then:

  df.repartition(8)
    .select("col1", "col2", ..., "col100")
    .write()
    .mode(SaveMode.Append)
    .saveAsTable("mytable");
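For reference, one way to make the projected column count match the table schema is to pad the DataFrame with null literals for the table columns the CSV does not supply. This is a minimal sketch, assuming the table's extra columns are nullable; "col101" and "col102" are hypothetical names standing in for the actual missing columns:

  import static org.apache.spark.sql.functions.lit;

  import org.apache.spark.sql.SaveMode;

  // Add null placeholders for the table columns missing from the CSV,
  // so the projection width equals the table schema width
  // ("col101"/"col102" are hypothetical; use the real column names).
  df = df.withColumn("col101", lit(null).cast("string"))
         .withColumn("col102", lit(null).cast("string"));

  df.repartition(8)
    .write()
    .mode(SaveMode.Append)
    .saveAsTable("mytable");

The cast should match each column's actual Cassandra type; the point is that the CassandraSourceRelation check compares the number of columns in the query's projection against the table schema, so the two counts have to line up before the write.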

