How to connect to Spark (CDH 5.8 Docker VMs on a remote host)? Do I need to map port 7077 on the container?


Currently I can access HDFS from inside my application, and I'd like to use Cloudera's Spark (enabled in Cloudera Manager) instead of running a local Spark.

Right now I have HDFS defined in core-site.xml and run the app with --master yarn, so I don't need to set a machine address for the HDFS files. But this way the Spark job runs locally and not on the "cluster", which is not what I want. When I try to set --master [namenode]:[port] it does not connect. I wonder whether I'm pointing at the correct port, whether I have to map that port on the Docker container, or whether I'm missing some YARN setup.
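For context, a minimal sketch of how the driver is being created, assuming the Spark 1.6 that ships with CDH 5.8; the class name and the HDFS path are placeholders, not my actual project code:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RemoteCdhTest {
  def main(args: Array[String]): Unit = {
    // "yarn-client" resolves the cluster from HADOOP_CONF_DIR/YARN_CONF_DIR;
    // a spark://host:7077 URL only applies to a standalone master, which a
    // YARN-managed CDH Spark does not expose unless explicitly configured.
    val conf = new SparkConf()
      .setAppName("remote-cdh-test")
      .setMaster("yarn-client")

    val sc = new SparkContext(conf)

    // HDFS paths resolve through fs.defaultFS from core-site.xml,
    // so no explicit namenode host/port is needed here.
    val lines = sc.textFile("/user/test/sample.txt") // hypothetical path
    println(lines.count())

    sc.stop()
  }
}
```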

Additionally, I've been testing SnappyData (Inc.) as a Spark SQL in-memory database solution. The goal is to run the Snappy JVMs locally while redirecting the Spark jobs to the VM cluster; the whole idea is to test performance against the Hadoop implementation. This setup is not the final product (if Snappy is local and Spark is "really" remote, I believe it won't be efficient; in that scenario I would bring the Snappy JVMs to the same cluster).
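A rough sketch of the comparison I have in mind; the locator address, the property key, and the table names are assumptions (they vary by SnappyData release), not my actual configuration:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SnappyContext

object SnappyCompareTest {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("snappy-vs-hdfs")
      .setMaster("yarn-client")
      // Assumed property and port for pointing the connector at the locally
      // running Snappy locator; check the docs of the installed version.
      .set("snappydata.store.locators", "localhost:10334")

    val sc = new SparkContext(conf)
    val snc = SnappyContext(sc)

    // Crude timing helper to compare the in-memory path with the HDFS path.
    def time[T](label: String)(body: => T): T = {
      val start = System.nanoTime()
      val result = body
      println(s"$label: ${(System.nanoTime() - start) / 1e6} ms")
      result
    }

    // Same query against a Snappy column table and an HDFS-backed table
    // (both table names are made-up placeholders).
    time("snappy")(snc.sql("SELECT count(*) FROM snappy_table").collect())
    time("hdfs")(snc.sql("SELECT count(*) FROM hdfs_table").collect())

    sc.stop()
  }
}
```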

Thanks in advance!

