DataFlow native concurrency has been implemented in SynapseGrid
As has already been discussed, there are a few possible ways to run a StaticSystem (synaptic grid) on a multicore processor. The approach implemented previously was to run subsystems inside Akka actors.
However, this is not the finest-grained concurrency possible. A whole subsystem runs inside a single thread (the current actor's thread), and it can become a bottleneck for other subsystems.
What is the smallest element that can run in parallel? For SynapseGrid it is a function that connects two contacts. If it is a pure function, then it can safely run in a separate thread.
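To illustrate the idea (this is a hypothetical sketch, not SynapseGrid's API: the names `double` and `addTen` are made up), each arrow between two contacts is just a function, and a pure function can be evaluated on any thread because it touches no shared mutable state:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Two independent pure "links" between contacts.
val double: Int => Int = _ * 2
val addTen: Int => Int = _ + 10

// Both links can be evaluated concurrently on the same input signal.
val results = Future.sequence(Seq(Future(double(21)), Future(addTen(21))))
val Seq(r1, r2) = Await.result(results, 5.seconds)
// r1 == 42 and r2 == 31 regardless of which thread evaluated each link
```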
Special care should be taken when dealing with states. Access to states must be serialized according to the "happens-before" principle: we cannot read a state value before all computations that could influence it have completed, and no other computation that requires this state can start until the current one completes. All other computations, however, can run in parallel. In particular, if two computations use different states, they can run in parallel.
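One way to picture this serialization (again a hypothetical sketch, not SynapseGrid internals: `SerializedState` is a made-up name) is to chain all computations on a given state into a single Future, so that each update happens-before the next, while chains for different states proceed in parallel:

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

// All computations touching one state are appended to that state's chain,
// so they are serialized among themselves; separate states have separate
// chains and therefore run in parallel.
final class SerializedState[S](initial: S) {
  private var chain: Future[S] = Future.successful(initial)
  def update(f: S => S): Future[S] = synchronized {
    chain = chain.map(f)
    chain
  }
}

val counterA = new SerializedState(0)
val counterB = new SerializedState(0)

// Updates to counterA are ordered among themselves but run
// concurrently with updates to counterB.
(1 to 100).foreach { _ => counterA.update(_ + 1); counterB.update(_ + 1) }

val a = Await.result(counterA.update(identity), 5.seconds)
val b = Await.result(counterB.update(identity), 5.seconds)
// Deterministically a == 100 and b == 100, whatever the scheduling.
```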
Recently this fine-grained approach has been implemented in SynapseGrid (the synapse-grid-concurrent module). To run a system in some ExecutionContext, one converts it to a parallel SimpleSignalProcessor:
import scala.concurrent.ExecutionContext.Implicits.global

val f = system.toStaticSystem.toParallelSimpleSignalProcessor.toMapTransducer(input, output)
val y = f(x)
That's it. Now the whole system runs with fine-grained parallelism. To check that the results are exactly the same, the same system can also be converted to the traditional single-threaded version:
val g = system.toStaticSystem.toSimpleSignalProcessor.toMapTransducer(input, output)
val y2 = g(x)
assert(y === y2)
The concurrency implementation guarantees that the results will be the same.
The tests for the module contain examples of how it works.
The dependency for the synapse-grid-concurrent module:
"ru.primetalk" % "synapse-grid-concurrent" % "1.3.0"
The source code is published on GitHub.
Labels: Concurrency, Dataflow concurrency, Dataflow programming, Functional Reactive Programming, Scala, SynapseGrid