multithreading - Different execution contexts and thread allocation with akka-http


Not sure if this is the right Stack Exchange site for this question.

I've got an akka-http application that acts as a front for some heavy computation. The requests it handles vary in the time it takes to process them: some finish within 1 second, some take more than that. The computation is purely asynchronous, there's no Await at any point, and I complete requests with a Future, i.e.:

val spotsJsonF: Future[String] =
  spotsF.map(spots => DebugFormatter.produceJson(text, spots._1, spots._2, env))

complete(spotsJsonF.map { t => HttpEntity(ContentTypes.`application/json`, t) })

My requirements/assumptions:

  • I need to maximise parallelism, i.e. connections shouldn't be rejected under heavy load.
  • I can live with (even small) requests taking longer if the service is busy.
  • I can live with extremely long requests timing out under heavy load, as long as they don't affect parallelism after the HTTP request has finished with a timeout.

To achieve that, I provided a separate execution context (i.e. Scala's default ExecutionContext.global) for the heavy computation, i.e. it spawns and modifies its Futures on a different thread pool than the one used by the akka-http dispatcher. I thought this would stop the computation from "sitting" on akka's threads, so akka could accept more connections. At the moment akka uses its default dispatcher (my reference.conf is empty):

    "default-dispatcher": {
      "attempt-teamwork": "on",
      "default-executor": {
        "fallback": "fork-join-executor"
      },
      "executor": "default-executor",
      "fork-join-executor": {
        "parallelism-factor": 3,
        "parallelism-max": 64,
        "parallelism-min": 8,
        "task-peeking-mode": "FIFO"
      },
      "mailbox-requirement": "",
      "shutdown-timeout": "1s",
      "thread-pool-executor": {
        "allow-core-timeout": "on",
        "core-pool-size-factor": 3,
        "core-pool-size-max": 64,
        "core-pool-size-min": 8,
        "fixed-pool-size": "off",
        "keep-alive-time": "60s",
        "max-pool-size-factor": 3,
        "max-pool-size-max": 64,
        "max-pool-size-min": 8,
        "task-queue-size": -1,
        "task-queue-type": "linked"
      },
      "throughput": 5,
      "throughput-deadline-time": "0ms",
      "type": "Dispatcher"
    },
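For reference, a minimal sketch of the kind of isolation described above, assuming a hypothetical `HeavyPool` object and a stand-in `compute` function (neither is from the original code): the heavy Futures run on a dedicated fixed-size pool instead of akka's dispatcher.

```scala
import java.util.concurrent.Executors
import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContext, Future}

// Hypothetical dedicated pool for the CPU-bound work, sized to the core
// count so it cannot starve the akka-http dispatcher of threads.
object HeavyPool {
  private val cores = Runtime.getRuntime.availableProcessors()
  implicit val heavyEc: ExecutionContext =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(cores))
}

// Stand-in for the real heavy computation; the Future's body runs on
// HeavyPool, not on the akka-http dispatcher.
def compute(n: Int): Future[Int] = {
  import HeavyPool.heavyEc
  Future((1 to n).sum)
}
```

The route would then `complete` with the Future produced this way; only the final mapping to an `HttpEntity` needs to involve akka's own machinery.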

What happens, though, is that the long-running computation keeps executing long after akka has cancelled the request due to a timeout. With a limited number of cores, this means the number of rejected requests starts increasing, even though the computation that was started during the overload is no longer needed.

Clearly, I have no idea how to manage the threads in this application.

What's the best way to satisfy these requirements? Are several thread pools a good or bad idea? Do I need to explicitly cancel things? Maybe using Scala's vanilla Future isn't the best option at this point?

To me it sounds like your problem isn't managing threads; isolating the heavy work on a separate dispatcher, as you have done, is fine. It is about managing the actual processing.

To be able to stop a long-running process "mid-work", so to speak, you need to split it into smaller chunks so that it can abort midstream if the result is no longer needed.

A common pattern with actors is to have the processing actor either store the result "so far" or send it to itself as a message; that way it can react to a "stop working" message in between chunks, or possibly check whether it has been processing for such a long wall-clock time that it should abort. The message triggering the workload could, for example, contain such a timeout value to allow the "client" to specify it.
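As a sketch of that chunking idea without the actor machinery (the names `sumChunked` and `Cancelled` are invented for illustration): do one bounded chunk of work at a time and re-check an abort flag before starting the next chunk.

```scala
import java.util.concurrent.atomic.AtomicBoolean
import scala.annotation.tailrec

// Invented exception type signalling that the result is no longer needed.
final class Cancelled extends RuntimeException("aborted: no longer needed")

// Hypothetical chunked computation: sums 1..upTo in bounded chunks and
// checks the cancellation flag between chunks, so it can stop midstream.
def sumChunked(upTo: Long, chunk: Long, cancelled: AtomicBoolean): Long = {
  @tailrec
  def loop(from: Long, acc: Long): Long =
    if (from > upTo) acc
    else if (cancelled.get()) throw new Cancelled
    else {
      val to = math.min(from + chunk - 1, upTo)
      // one bounded chunk of work; the flag is re-checked before the next
      loop(to + 1, acc + (from to to).sum)
    }
  loop(1L, 0L)
}
```

The request-timeout handler (or a "stop working" message) would flip the flag, and the work stops at the next chunk boundary instead of running to completion.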

(This is pretty much the same thing as dealing with InterruptedException and Thread.isInterrupted correctly in a hand-threaded, blocking application.)
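The blocking-world analogue can be sketched like this (the `copyChunks` name and the trivial work unit are invented): poll the thread's interrupt status between bounded units of work and stop early when it is set.

```scala
// Toy illustration of cooperative abort with plain threads: check the
// interrupt flag between units of blocking work, mirroring the chunked
// approach above.
def copyChunks(steps: Int): Int = {
  var done = 0
  while (done < steps && !Thread.currentThread().isInterrupted) {
    // one bounded unit of (blocking) work would go here
    done += 1
  }
  done // how many units actually ran before stopping
}
```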

