Moving data from MySQL to Cassandra

Source: php中文网 (php.cn) | Published 2016-06-01 | Original article

I had a relational database that I wanted to migrate to Cassandra. Cassandra's sstableloader provides an option to load existing data from flat files into a Cassandra ring. Hence it can be used to migrate data from relational databases to Cassandra, as most relational databases let us export their data into flat files.
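As a rough sketch of the flat-file step, the snippet below writes rows to a CSV file of the kind a bulk loader could consume. The column names match the Category table used later in this post, but the rows, the file name, and exporting via Python at all are illustrative assumptions; a real export would typically use mysqldump or MySQL's SELECT ... INTO OUTFILE.

```python
import csv

# Illustrative rows, as they might come back from a query against
# the Category table (categoryName, description, image).
rows = [
    ("books", "Printed and electronic books", None),
    ("music", "CDs and digital albums", None),
]
columns = ["categoryName", "description", "image"]

with open("category.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)  # header row
    for row in rows:
        # Represent SQL NULL as an empty field in the flat file.
        writer.writerow(["" if v is None else v for v in row])
```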

Sqoop gives the option to do this effectively. Interestingly, DataStax Enterprise provides everything we want in the big data space as one package: Cassandra, Hadoop, Hive, Pig, Sqoop, and Mahout, which comes in handy in this case.

Under the resources directory, you may find the cassandra, dse, hadoop, hive, log4j-appender, mahout, pig, solr, sqoop, and tomcat specific configurations.

For example, from resources/hadoop/bin, you can format the Hadoop name node as usual:

 ./hadoop namenode -format

* Download and extract the DataStax Enterprise binary archive (dse-2.1-bin.tar.gz).

* Follow the documentation, which is also available as a PDF.

* Migrating a relational database to Cassandra is documented and has also been blogged about.

* Before starting DataStax Enterprise, make sure that JAVA_HOME is set. It can also be set directly in conf/hadoop-env.sh.

* Place the JDBC connector for the relational database in a location reachable by Sqoop.

I put mysql-connector-java-5.1.12-bin.jar under resources/sqoop.

* Set the environment

$ bin/dse-env.sh

* Start DataStax Enterprise as an Analytics node.

$ sudo bin/dse cassandra -t

where cassandra starts the Cassandra process plus CassandraFS, and the -t option starts the Hadoop JobTracker and TaskTracker processes.

If you start without the -t flag, the following exception will be thrown during the Sqoop operations discussed later:

No jobtracker found

Unable to run : jobtracker not found

 

Hence do not miss the -t flag.

* Start the Cassandra CLI to view the Cassandra keyspaces; you will be able to see the data in Cassandra once you migrate it using Sqoop as shown below.

$ bin/cassandra-cli -host localhost -port 9160

Confirm that it is connected to the test cluster created on port 9160, using the command below from the CLI.

[default@unknown] describe cluster;

Cluster Information:

   Snitch: com.datastax.bdp.snitch.DseDelegateSnitch

   Partitioner: org.apache.cassandra.dht.RandomPartitioner

   Schema versions: 

f5a19a50-b616-11e1-0000-45b29245ddff: [127.0.1.1]

If you omit the host/port (starting the CLI with just bin/cassandra-cli) or give them incorrectly, you will get the response "Not connected to a cassandra instance."
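Before pointing the CLI or Sqoop at a Thrift host, it can be handy to confirm that something is actually listening on the port. The helper below is a hypothetical convenience written for this post, not part of DSE or Cassandra; it simply attempts a TCP connection.

```python
import socket

def thrift_port_open(host="localhost", port=9160, timeout=2.0):
    """Return True if a TCP server is accepting connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On a node where Cassandra is up and listening on the default Thrift
# port 9160, thrift_port_open() would return True.
```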

$ bin/dse sqoop import --connect jdbc:mysql://127.0.0.1:3306/shopping_cart_db --username root --password root --table Category --split-by categoryName --cassandra-keyspace shopping_cart_db --cassandra-column-family Category_cf --cassandra-row-key categoryName --cassandra-thrift-host localhost --cassandra-create-schema

The above command migrates the table "Category" in shopping_cart_db, which has the primary key categoryName, into a Cassandra keyspace named shopping_cart_db, with the Cassandra row key categoryName. You may add the MySQL-specific --direct option, which is faster. In my command above, everything runs on localhost. For reference, the MySQL schema of the Category table is:

+--------------+-------------+------+-----+---------+-------+

| Field        | Type        | Null | Key | Default | Extra |

+--------------+-------------+------+-----+---------+-------+

| categoryName | varchar(50) | NO   | PRI | NULL    |       |

| description  | text        | YES  |     | NULL    |       |

| image        | blob        | YES  |     | NULL    |       |

+--------------+-------------+------+-----+---------+-------+
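To make the flag-to-concept mapping explicit, here is a sketch that assembles the same command line from its parameters. build_sqoop_import is a hypothetical helper written for this post, not part of Sqoop or DSE; it only reproduces the command shown above.

```python
def build_sqoop_import(db, table, key, host="localhost"):
    """Assemble the dse sqoop import command line for one table."""
    return [
        "bin/dse", "sqoop", "import",
        "--connect", f"jdbc:mysql://127.0.0.1:3306/{db}",
        "--username", "root", "--password", "root",
        "--table", table,
        "--split-by", key,                  # column used to split work among mappers
        "--cassandra-keyspace", db,         # target keyspace mirrors the MySQL schema name
        "--cassandra-column-family", f"{table}_cf",
        "--cassandra-row-key", key,         # row key mirrors the MySQL primary key
        "--cassandra-thrift-host", host,
        "--cassandra-create-schema",        # create keyspace/column family if missing
    ]

cmd = build_sqoop_import("shopping_cart_db", "Category", "categoryName")
print(" ".join(cmd))
```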

This also creates the corresponding Java class (Category.java) in the working directory.

To import all the tables in the database instead of a single table:

$ bin/dse sqoop import-all-tables -m 1 --connect jdbc:mysql://127.0.0.1:3306/shopping_cart_db --username root --password root --cassandra-thrift-host localhost --cassandra-create-schema --direct

Here the "-m 1" flag ensures a sequential import with a single mapper. If it is not specified, the below exception will be thrown for tables without a primary key:

ERROR tool.ImportAllTablesTool: Error during import: No primary key could be found for table Category. Please specify one with --split-by or perform a sequential import with '-m 1'.

To check whether the keyspace has been created:

[default@unknown] show keyspaces;

................

Keyspace: shopping_cart_db:

  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy

  Durable Writes: true

    Options: [replication_factor:1]

  Column Families:

    ColumnFamily: Category_cf

      Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type

      Default column value validator: org.apache.cassandra.db.marshal.UTF8Type

      Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type

      Row cache size / save period in seconds / keys to save : 0.0/0/all

      Row Cache Provider: org.apache.cassandra.cache.SerializingCacheProvider

      Key cache size / save period in seconds: 200000.0/14400

      GC grace seconds: 864000

      Compaction min/max thresholds: 4/32

      Read repair chance: 1.0

      Replicate on write: true

      Bloom Filter FP chance: default

      Built indexes: []

      Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy

.............

[default@unknown] describe shopping_cart_db;

Keyspace: shopping_cart_db:

  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy

  Durable Writes: true

    Options: [replication_factor:1]

  Column Families:

    ColumnFamily: Category_cf

      Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type

      Default column value validator: org.apache.cassandra.db.marshal.UTF8Type

      Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type

      Row cache size / save period in seconds / keys to save : 0.0/0/all

      Row Cache Provider: org.apache.cassandra.cache.SerializingCacheProvider

      Key cache size / save period in seconds: 200000.0/14400

      GC grace seconds: 864000

      Compaction min/max thresholds: 4/32

      Read repair chance: 1.0

      Replicate on write: true

      Bloom Filter FP chance: default

      Built indexes: []

      Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy

You may also use Hive to view the databases created in Cassandra in an SQL-like manner.

* Start Hive

$ bin/dse hive

hive> show databases; 

OK

default

shopping_cart_db

When the entire database is imported as above, a separate Java class is created for each of the tables. The full console output follows:

$ bin/dse sqoop import-all-tables -m 1 --connect jdbc:mysql://127.0.0.1:3306/shopping_cart_db --username root --password root --cassandra-thrift-host localhost --cassandra-create-schema --direct

12/06/15 15:42:11 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.

12/06/15 15:42:11 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.

12/06/15 15:42:11 INFO tool.CodeGenTool: Beginning code generation

12/06/15 15:42:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Category` AS t LIMIT 1

12/06/15 15:42:11 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..

Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Category.java uses or overrides a deprecated API.

Note: Recompile with -Xlint:deprecation for details.

12/06/15 15:42:13 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Category.jar

12/06/15 15:42:13 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import

12/06/15 15:42:13 INFO mapreduce.ImportJobBase: Beginning import of Category

12/06/15 15:42:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

12/06/15 15:42:15 INFO mapred.JobClient: Running job: job_201206151241241_0007

12/06/15 15:42:16 INFO mapred.JobClient:  map 0% reduce 0%

12/06/15 15:42:25 INFO mapred.JobClient:  map 100% reduce 0%

12/06/15 15:42:25 INFO mapred.JobClient: Job complete: job_201206151241241_0007

12/06/15 15:42:25 INFO mapred.JobClient: Counters: 18

12/06/15 15:42:25 INFO mapred.JobClient:   Job Counters 

12/06/15 15:42:25 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=6480

12/06/15 15:42:25 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0

12/06/15 15:42:25 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0

12/06/15 15:42:25 INFO mapred.JobClient:     Launched map tasks=1

12/06/15 15:42:25 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0

12/06/15 15:42:25 INFO mapred.JobClient:   File Output Format Counters 

12/06/15 15:42:25 INFO mapred.JobClient:     Bytes Written=2848

12/06/15 15:42:25 INFO mapred.JobClient:   FileSystemCounters

12/06/15 15:42:25 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=21419

12/06/15 15:42:25 INFO mapred.JobClient:     CFS_BYTES_WRITTEN=2848

12/06/15 15:42:25 INFO mapred.JobClient:     CFS_BYTES_READ=87

12/06/15 15:42:25 INFO mapred.JobClient:   File Input Format Counters 

12/06/15 15:42:25 INFO mapred.JobClient:     Bytes Read=0

12/06/15 15:42:25 INFO mapred.JobClient:   Map-Reduce Framework

12/06/15 15:42:25 INFO mapred.JobClient:     Map input records=1

12/06/15 15:42:25 INFO mapred.JobClient:     Physical memory (bytes) snapshot=119435264

12/06/15 15:42:25 INFO mapred.JobClient:     Spilled Records=0

12/06/15 15:42:25 INFO mapred.JobClient:     CPU time spent (ms)=630

12/06/15 15:42:25 INFO mapred.JobClient:     Total committed heap usage (bytes)=121241241600

12/06/15 15:42:25 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2085318656

12/06/15 15:42:25 INFO mapred.JobClient:     Map output records=36

12/06/15 15:42:25 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87

12/06/15 15:42:25 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 11.4492 seconds (0 bytes/sec)

12/06/15 15:42:25 INFO mapreduce.ImportJobBase: Retrieved 36 records.

12/06/15 15:42:25 INFO tool.CodeGenTool: Beginning code generation

12/06/15 15:42:25 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Customer` AS t LIMIT 1

12/06/15 15:42:25 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..

Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Customer.java uses or overrides a deprecated API.

Note: Recompile with -Xlint:deprecation for details.

12/06/15 15:42:25 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Customer.jar

12/06/15 15:42:26 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import

12/06/15 15:42:26 INFO mapreduce.ImportJobBase: Beginning import of Customer

12/06/15 15:42:26 INFO mapred.JobClient: Running job: job_201206151241241_0008

12/06/15 15:42:27 INFO mapred.JobClient:  map 0% reduce 0%

12/06/15 15:42:35 INFO mapred.JobClient:  map 100% reduce 0%

12/06/15 15:42:35 INFO mapred.JobClient: Job complete: job_201206151241241_0008

12/06/15 15:42:35 INFO mapred.JobClient: Counters: 17

12/06/15 15:42:35 INFO mapred.JobClient:   Job Counters 

12/06/15 15:42:35 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=6009

12/06/15 15:42:35 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0

12/06/15 15:42:35 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0

12/06/15 15:42:35 INFO mapred.JobClient:     Launched map tasks=1

12/06/15 15:42:35 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0

12/06/15 15:42:35 INFO mapred.JobClient:   File Output Format Counters 

12/06/15 15:42:35 INFO mapred.JobClient:     Bytes Written=0

12/06/15 15:42:35 INFO mapred.JobClient:   FileSystemCounters

12/06/15 15:42:35 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=21489

12/06/15 15:42:35 INFO mapred.JobClient:     CFS_BYTES_READ=87

12/06/15 15:42:35 INFO mapred.JobClient:   File Input Format Counters 

12/06/15 15:42:35 INFO mapred.JobClient:     Bytes Read=0


12/06/15 15:42:35 INFO mapred.JobClient:   Map-Reduce Framework

12/06/15 15:42:35 INFO mapred.JobClient:     Map input records=1

12/06/15 15:42:35 INFO mapred.JobClient:     Physical memory (bytes) snapshot=164855808

12/06/15 15:42:35 INFO mapred.JobClient:     Spilled Records=0

12/06/15 15:42:35 INFO mapred.JobClient:     CPU time spent (ms)=510

12/06/15 15:42:35 INFO mapred.JobClient:     Total committed heap usage (bytes)=121241241600

12/06/15 15:42:35 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2082869248

12/06/15 15:42:35 INFO mapred.JobClient:     Map output records=0

12/06/15 15:42:35 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87

12/06/15 15:42:35 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 9.3143 seconds (0 bytes/sec)

12/06/15 15:42:35 INFO mapreduce.ImportJobBase: Retrieved 0 records.

12/06/15 15:42:35 INFO tool.CodeGenTool: Beginning code generation

12/06/15 15:42:35 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `OrderEntry` AS t LIMIT 1

12/06/15 15:42:35 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..

Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/OrderEntry.java uses or overrides a deprecated API.

Note: Recompile with -Xlint:deprecation for details.

12/06/15 15:42:35 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/OrderEntry.jar

12/06/15 15:42:36 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import

12/06/15 15:42:36 INFO mapreduce.ImportJobBase: Beginning import of OrderEntry

12/06/15 15:42:36 INFO mapred.JobClient: Running job: job_201206151241241_0009

12/06/15 15:42:37 INFO mapred.JobClient:  map 0% reduce 0%

12/06/15 15:42:45 INFO mapred.JobClient:  map 100% reduce 0%

12/06/15 15:42:45 INFO mapred.JobClient: Job complete: job_201206151241241_0009

12/06/15 15:42:45 INFO mapred.JobClient: Counters: 17

12/06/15 15:42:45 INFO mapred.JobClient:   Job Counters 

12/06/15 15:42:45 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=6381

12/06/15 15:42:45 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0

12/06/15 15:42:45 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0

12/06/15 15:42:45 INFO mapred.JobClient:     Launched map tasks=1

12/06/15 15:42:45 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0

12/06/15 15:42:45 INFO mapred.JobClient:   File Output Format Counters 

12/06/15 15:42:45 INFO mapred.JobClient:     Bytes Written=0

12/06/15 15:42:45 INFO mapred.JobClient:   FileSystemCounters

12/06/15 15:42:45 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=21569

12/06/15 15:42:45 INFO mapred.JobClient:     CFS_BYTES_READ=87

12/06/15 15:42:45 INFO mapred.JobClient:   File Input Format Counters 

12/06/15 15:42:45 INFO mapred.JobClient:     Bytes Read=0

12/06/15 15:42:45 INFO mapred.JobClient:   Map-Reduce Framework

12/06/15 15:42:45 INFO mapred.JobClient:     Map input records=1

12/06/15 15:42:45 INFO mapred.JobClient:     Physical memory (bytes) snapshot=137252864

12/06/15 15:42:45 INFO mapred.JobClient:     Spilled Records=0

12/06/15 15:42:45 INFO mapred.JobClient:     CPU time spent (ms)=520

12/06/15 15:42:45 INFO mapred.JobClient:     Total committed heap usage (bytes)=121241241600

12/06/15 15:42:45 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2014703616

12/06/15 15:42:45 INFO mapred.JobClient:     Map output records=0

12/06/15 15:42:45 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87

12/06/15 15:42:45 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 9.2859 seconds (0 bytes/sec)

12/06/15 15:42:45 INFO mapreduce.ImportJobBase: Retrieved 0 records.

12/06/15 15:42:45 INFO tool.CodeGenTool: Beginning code generation

12/06/15 15:42:45 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `OrderItem` AS t LIMIT 1

12/06/15 15:42:45 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..

Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/OrderItem.java uses or overrides a deprecated API.

Note: Recompile with -Xlint:deprecation for details.

12/06/15 15:42:45 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/OrderItem.jar

12/06/15 15:42:46 WARN manager.CatalogQueryManager: The table OrderItem contains a multi-column primary key. Sqoop will default to the column orderNumber only for this job.

12/06/15 15:42:46 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import

12/06/15 15:42:46 INFO mapreduce.ImportJobBase: Beginning import of OrderItem

12/06/15 15:42:46 INFO mapred.JobClient: Running job: job_201206151241241_0010

12/06/15 15:42:47 INFO mapred.JobClient:  map 0% reduce 0%

12/06/15 15:42:55 INFO mapred.JobClient:  map 100% reduce 0%

12/06/15 15:42:55 INFO mapred.JobClient: Job complete: job_201206151241241_0010

12/06/15 15:42:55 INFO mapred.JobClient: Counters: 17

12/06/15 15:42:55 INFO mapred.JobClient:   Job Counters 

12/06/15 15:42:55 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=5949

12/06/15 15:42:55 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0

12/06/15 15:42:55 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0

12/06/15 15:42:55 INFO mapred.JobClient:     Launched map tasks=1

12/06/15 15:42:55 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0

12/06/15 15:42:55 INFO mapred.JobClient:   File Output Format Counters 

12/06/15 15:42:55 INFO mapred.JobClient:     Bytes Written=0

12/06/15 15:42:55 INFO mapred.JobClient:   FileSystemCounters

12/06/15 15:42:55 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=21524

12/06/15 15:42:55 INFO mapred.JobClient:     CFS_BYTES_READ=87

12/06/15 15:42:55 INFO mapred.JobClient:   File Input Format Counters 

12/06/15 15:42:55 INFO mapred.JobClient:     Bytes Read=0

12/06/15 15:42:55 INFO mapred.JobClient:   Map-Reduce Framework

12/06/15 15:42:55 INFO mapred.JobClient:     Map input records=1

12/06/15 15:42:55 INFO mapred.JobClient:     Physical memory (bytes) snapshot=116674560

12/06/15 15:42:55 INFO mapred.JobClient:     Spilled Records=0

12/06/15 15:42:55 INFO mapred.JobClient:     CPU time spent (ms)=590

12/06/15 15:42:55 INFO mapred.JobClient:     Total committed heap usage (bytes)=121241241600

12/06/15 15:42:55 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2014703616

12/06/15 15:42:55 INFO mapred.JobClient:     Map output records=0

12/06/15 15:42:55 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87

12/06/15 15:42:55 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 9.2539 seconds (0 bytes/sec)

12/06/15 15:42:55 INFO mapreduce.ImportJobBase: Retrieved 0 records.

12/06/15 15:42:55 INFO tool.CodeGenTool: Beginning code generation

12/06/15 15:42:55 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Payment` AS t LIMIT 1

12/06/15 15:42:55 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..

Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Payment.java uses or overrides a deprecated API.

Note: Recompile with -Xlint:deprecation for details.

12/06/15 15:42:55 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Payment.jar

12/06/15 15:42:56 WARN manager.CatalogQueryManager: The table Payment contains a multi-column primary key. Sqoop will default to the column orderNumber only for this job.

12/06/15 15:42:56 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import

12/06/15 15:42:56 INFO mapreduce.ImportJobBase: Beginning import of Payment

12/06/15 15:42:56 INFO mapred.JobClient: Running job: job_201206151241241_0011

12/06/15 15:42:57 INFO mapred.JobClient:  map 0% reduce 0%

12/06/15 15:43:05 INFO mapred.JobClient:  map 100% reduce 0%

12/06/15 15:43:05 INFO mapred.JobClient: Job complete: job_201206151241241_0011

12/06/15 15:43:05 INFO mapred.JobClient: Counters: 17

12/06/15 15:43:05 INFO mapred.JobClient:   Job Counters 

12/06/15 15:43:05 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=5914

12/06/15 15:43:05 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0

12/06/15 15:43:05 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0

12/06/15 15:43:05 INFO mapred.JobClient:     Launched map tasks=1

12/06/15 15:43:05 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0

12/06/15 15:43:05 INFO mapred.JobClient:   File Output Format Counters 

12/06/15 15:43:05 INFO mapred.JobClient:     Bytes Written=0

12/06/15 15:43:05 INFO mapred.JobClient:   FileSystemCounters

12/06/15 15:43:05 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=21518

12/06/15 15:43:05 INFO mapred.JobClient:     CFS_BYTES_READ=87

12/06/15 15:43:05 INFO mapred.JobClient:   File Input Format Counters 

12/06/15 15:43:05 INFO mapred.JobClient:     Bytes Read=0

12/06/15 15:43:05 INFO mapred.JobClient:   Map-Reduce Framework

12/06/15 15:43:05 INFO mapred.JobClient:     Map input records=1

12/06/15 15:43:05 INFO mapred.JobClient:     Physical memory (bytes) snapshot=137998336

12/06/15 15:43:05 INFO mapred.JobClient:     Spilled Records=0

12/06/15 15:43:05 INFO mapred.JobClient:     CPU time spent (ms)=520

12/06/15 15:43:05 INFO mapred.JobClient:     Total committed heap usage (bytes)=121241241600

12/06/15 15:43:05 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2082865152

12/06/15 15:43:05 INFO mapred.JobClient:     Map output records=0

12/06/15 15:43:05 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87

12/06/15 15:43:05 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 9.2642 seconds (0 bytes/sec)

12/06/15 15:43:05 INFO mapreduce.ImportJobBase: Retrieved 0 records.

12/06/15 15:43:05 INFO tool.CodeGenTool: Beginning code generation

12/06/15 15:43:05 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Product` AS t LIMIT 1

12/06/15 15:43:06 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..

Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Product.java uses or overrides a deprecated API.

Note: Recompile with -Xlint:deprecation for details.

12/06/15 15:43:06 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Product.jar

12/06/15 15:43:06 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import

12/06/15 15:43:06 INFO mapreduce.ImportJobBase: Beginning import of Product

12/06/15 15:43:07 INFO mapred.JobClient: Running job: job_201206151241241_0012

12/06/15 15:43:08 INFO mapred.JobClient:  map 0% reduce 0%

12/06/15 15:43:16 INFO mapred.JobClient:  map 100% reduce 0%

12/06/15 15:43:16 INFO mapred.JobClient: Job complete: job_201206151241241_0012

12/06/15 15:43:16 INFO mapred.JobClient: Counters: 18

12/06/15 15:43:16 INFO mapred.JobClient:   Job Counters 

12/06/15 15:43:16 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=5961

12/06/15 15:43:16 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0

12/06/15 15:43:16 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0

12/06/15 15:43:16 INFO mapred.JobClient:     Launched map tasks=1

12/06/15 15:43:16 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0

12/06/15 15:43:16 INFO mapred.JobClient:   File Output Format Counters 

12/06/15 15:43:16 INFO mapred.JobClient:     Bytes Written=248262

12/06/15 15:43:16 INFO mapred.JobClient:   FileSystemCounters

12/06/15 15:43:16 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=21527

12/06/15 15:43:16 INFO mapred.JobClient:     CFS_BYTES_WRITTEN=248262

12/06/15 15:43:16 INFO mapred.JobClient:     CFS_BYTES_READ=87

12/06/15 15:43:16 INFO mapred.JobClient:   File Input Format Counters 

12/06/15 15:43:16 INFO mapred.JobClient:     Bytes Read=0

12/06/15 15:43:16 INFO mapred.JobClient:   Map-Reduce Framework

12/06/15 15:43:16 INFO mapred.JobClient:     Map input records=1

12/06/15 15:43:16 INFO mapred.JobClient:     Physical memory (bytes) snapshot=144871424

12/06/15 15:43:16 INFO mapred.JobClient:     Spilled Records=0

12/06/15 15:43:16 INFO mapred.JobClient:     CPU time spent (ms)=1030

12/06/15 15:43:16 INFO mapred.JobClient:     Total committed heap usage (bytes)=121241241600

12/06/15 15:43:16 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2085318656

12/06/15 15:43:16 INFO mapred.JobClient:     Map output records=300

12/06/15 15:43:16 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87

12/06/15 15:43:16 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 9.2613 seconds (0 bytes/sec)

12/06/15 15:43:16 INFO mapreduce.ImportJobBase: Retrieved 300 records.

I found DataStax an interesting project to explore further. I have blogged about the issues that I faced with this as a learner, and how easily they can be fixed: "Issues that you may encounter during the migration to Cassandra using DataStax/Sqoop and the fixes".
