This repository was archived by the owner on Sep 18, 2023. It is now read-only.
UnsatisfiedLinkError while reading the dataframe #1139
Unanswered · baratamavinash225 asked this question in Q&A · Replies: 0 comments
Hi Team,
I am getting the error below while reading a dataframe.
java.lang.UnsatisfiedLinkError: 'long org.apache.arrow.dataset.file.JniWrapper.createOrcFileFormat(java.lang.String[])'
Here are the code snippets that I am running:

val path = "file:///Users/avinashbaratam/Downloads/userdata1.parquet"
val df = spark.read
  .option(ArrowOptions.KEY_ORIGINAL_FORMAT, "parquet")
  .option(ArrowOptions.KEY_FILESYSTEM, "filesystem")
  .format("arrow")
  .load(path)

spark.catalog.createTable("web_site", "arrow", Map("path" -> path, "originalFormat" -> "orc"))
I have tried both Parquet & ORC, also I have tried both HDFS and local filesystem.
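For anyone debugging this class of failure: an UnsatisfiedLinkError on a JniWrapper native method usually means the JNI library backing it was never loaded into the JVM. Below is a minimal, hedged diagnostic sketch. It assumes the native library is named arrow_dataset_jni (the usual name in Arrow's Java dataset module; Arrow normally extracts it from the jar itself, so java.library.path is only a rough check, not the full story):

```java
import java.io.File;

public class CheckArrowJni {
    public static void main(String[] args) {
        // Map the logical name to the platform-specific file name,
        // e.g. libarrow_dataset_jni.so on Linux, libarrow_dataset_jni.dylib on macOS.
        // "arrow_dataset_jni" is an assumption about the library name.
        String libName = System.mapLibraryName("arrow_dataset_jni");

        // Scan each entry of java.library.path for the library file.
        boolean found = false;
        String libPath = System.getProperty("java.library.path", "");
        for (String dir : libPath.split(File.pathSeparator)) {
            if (!dir.isEmpty() && new File(dir, libName).exists()) {
                found = true;
                break;
            }
        }
        System.out.println(libName + " on java.library.path: " + found);
    }
}
```

If the library is neither on java.library.path nor bundled in the Arrow dataset jar on the classpath, every native method in JniWrapper will throw UnsatisfiedLinkError exactly as above.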
My spark-shell command looks like the one below:

spark-shell --driver-java-options "--add-exports java.base/sun.nio.ch=ALL-UNNAMED" \
Here is the full stack trace:
java.lang.UnsatisfiedLinkError: 'long org.apache.arrow.dataset.file.JniWrapper.createOrcFileFormat(java.lang.String[])'
at org.apache.arrow.dataset.file.JniWrapper.createOrcFileFormat(Native Method)
at org.apache.arrow.dataset.file.format.OrcFileFormat.createOrcFileFormat(OrcFileFormat.java:44)
at org.apache.arrow.dataset.file.format.OrcFileFormat.<init>(OrcFileFormat.java:29)
at org.apache.arrow.dataset.file.format.OrcFileFormat.create(OrcFileFormat.java:35)
at com.intel.oap.spark.sql.execution.datasources.v2.arrow.ArrowUtils$.getFormat(ArrowUtils.scala:120)
at com.intel.oap.spark.sql.execution.datasources.v2.arrow.ArrowUtils$.makeArrowDiscovery(ArrowUtils.scala:76)
at com.intel.oap.spark.sql.execution.datasources.v2.arrow.ArrowUtils$.readSchema(ArrowUtils.scala:47)
at com.intel.oap.spark.sql.execution.datasources.v2.arrow.ArrowUtils$.readSchema(ArrowUtils.scala:60)
at com.intel.oap.spark.sql.execution.datasources.arrow.ArrowFileFormat.convert(ArrowFileFormat.scala:59)
at com.intel.oap.spark.sql.execution.datasources.arrow.ArrowFileFormat.inferSchema(ArrowFileFormat.scala:65)
at org.apache.spark.sql.execution.datasources.DataSource.$anonfun$getOrInferFileFormatSchema$11(DataSource.scala:210)
at scala.Option.orElse(Option.scala:447)
at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:207)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:411)
at org.apache.spark.sql.execution.command.CreateDataSourceTableCommand.run(createDataSourceTables.scala:79)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecu