ubuntu - Executors lost when starting pyspark in YARN client mode -


I was able to run pyspark in YARN client mode on my laptop before, and I am now trying to set it up again on the same laptop. However, this time I can't get it running.

When I try to start pyspark in YARN client mode, it gives me the following error. I am using dynamic resource allocation, and I have set the executor memory (spark.executor.memory) to less than the YARN container memory. I am using Hadoop 2.6.4, Spark 1.6.1, and Ubuntu 15.10.
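One thing worth double-checking when executors die this way: on YARN, Spark 1.6 requests containers of size spark.executor.memory plus spark.yarn.executor.memoryOverhead, and the overhead defaults to max(384 MB, 10% of executor memory). If that total exceeds yarn.scheduler.maximum-allocation-mb (or the NodeManager's memory), YARN kills or never grants the container and the driver sees exactly this kind of "slave lost" error. A minimal sketch of the arithmetic (the function name is mine, not a Spark API):

```python
def yarn_container_request_mb(executor_memory_mb, overhead_mb=None):
    """Total memory YARN must grant per executor container.

    Sketch of Spark 1.6 defaults: overhead is max(384 MB, 10% of
    spark.executor.memory) unless spark.yarn.executor.memoryOverhead
    is set explicitly.
    """
    if overhead_mb is None:
        overhead_mb = max(384, int(0.10 * executor_memory_mb))
    return executor_memory_mb + overhead_mb


# A 1 GB executor actually needs a 1408 MB container (1024 + 384),
# so setting spark.executor.memory just under the YARN container
# limit can still push the total request over it.
print(yarn_container_request_mb(1024))
```

So "executor memory less than container memory" is not quite sufficient; the memory plus overhead must fit under the YARN maximum allocation.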

Is it possible this error is due to network issues?

16/06/12 01:49:34 INFO scheduler.DAGScheduler: Executor lost: 1 (epoch 0)
16/06/12 01:49:34 INFO cluster.YarnClientSchedulerBackend: Disabling executor 1.
16/06/12 01:49:34 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
16/06/12 01:49:34 INFO storage.BlockManagerMasterEndpoint: Removing block manager BlockManagerId(1, 192.168.2.16, 37900)
16/06/12 01:49:34 ERROR client.TransportClient: Failed to send RPC 9123554941984942265 to 192.168.2.16/192.168.2.16:47630: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
16/06/12 01:49:34 INFO storage.BlockManagerMaster: Removed 1 successfully in removeExecutor
16/06/12 01:49:34 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 1 at RPC address 192.168.2.16:47640, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC 9123554941984942265 to 192.168.2.16/192.168.2.16:47630: java.nio.channels.ClosedChannelException
    at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:239)
    at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:226)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
    at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:567)
    at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:801)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:699)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1122)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
    at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:32)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:908)
    at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:960)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:893)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.ClosedChannelException
16/06/12 01:49:34 ERROR cluster.YarnScheduler: Lost executor 1 on 192.168.2.16: Slave lost
16/06/12 01:49:34 INFO cluster.YarnClientSchedulerBackend: Disabling executor 2.
16/06/12 01:49:34 INFO scheduler.DAGScheduler: Executor lost: 2 (epoch 1)
16/06/12 01:49:34 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 2 from BlockManagerMaster.
16/06/12 01:49:34 ERROR client.TransportClient: Failed to send RPC 8690255566269835148 to 192.168.2.16/192.168.2.16:47630: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
16/06/12 01:49:34 INFO storage.BlockManagerMasterEndpoint: Removing block manager BlockManagerId(2, 192.168.2.16, 41124)
16/06/12 01:49:34 INFO storage.BlockManagerMaster: Removed 2 successfully in removeExecutor
16/06/12 01:49:34 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 2 at RPC address 192.168.2.16:47644, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC 8690255566269835148 to 192.168.2.16/192.168.2.16:47630: java.nio.channels.ClosedChannelException
    at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:239)
    at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:226)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
    at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:567)
    at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:801)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:699)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1122)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
    at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:32)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:908)
    at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:960)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:893)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)

