EGTM worker pool manager process.
In principle, it is similar to a standard Erlang worker pool,
but since we use GT.M via the NIF interface, we have to run
each worker in a separate Erlang VM for the pool to make any sense.
This is done via the slave module, which "forks" off slave
nodes based on the configuration option
[egtm, workers, nodes], an
array of node names to start.
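As an illustration, the node list might be supplied through the standard application environment; the exact nesting of the [egtm, workers, nodes] key and the node names below are assumptions, not taken from the EGTM distribution:

```erlang
%% sys.config sketch (hypothetical): three slave worker nodes
%% on the local host. The key nesting mirrors the documented
%% [egtm, workers, nodes] path, which is an assumption here.
[
 {egtm, [
   {workers, [
     {nodes, [egtm_w1, egtm_w2, egtm_w3]}
   ]}
 ]}
].
```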
This pool thus forms a small Erlang distribution cluster, restricted to a single host and to the same GT.M database.
For more advanced configurations with multiple hosts, replication, intelliroute, and other advanced features, take a look at the EGTM/Cluster product.
Keep in mind that since the pool uses EPMD-based RPC between
slave ErlVM nodes, it is always slower than a single-worker
configuration; a basic test shows roughly a 10x slowdown.
That is why egtm_pool is disabled by default: to use it,
you have to compile EGTM with EGTM_POOL_ENABLED and
configure the pool by setting the [egtm, workers, nodes] option.
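For reference, this is how a compile-time switch like EGTM_POOL_ENABLED typically works in Erlang source, via an -ifdef guard; this is a generic sketch of the mechanism, not EGTM's actual code, and the module and function names are made up:

```erlang
%% Generic sketch of an Erlang compile-time feature flag.
%% Build with: erlc -DEGTM_POOL_ENABLED pool_flag_demo.erl
-module(pool_flag_demo).
-export([pool_enabled/0]).

-ifdef(EGTM_POOL_ENABLED).
%% Compiled in only when the macro is defined.
pool_enabled() -> true.
-else.
%% Default: pool support is compiled out.
pool_enabled() -> false.
-endif.
```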
handle_info/2    One of our nodes has died, let's start it back up!
perform/2        Call a core-EGTM operation.
code_change(OldVsn, State, Extra) -> any()
handle_call(Request, From, State) -> any()
handle_cast(Msg, State) -> any()
handle_info(Info, State) -> any()
One of our nodes has died, let's start it back up!
init(Args) -> any()
perform(Operation, Args) -> any()

Call a core-EGTM operation Operation with arguments
Args via one of the nodes
in this pool.
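A call through the pool might look like the following; the operation atom set and its argument list are assumptions about the core-EGTM API, shown only to illustrate the calling convention of perform/2:

```erlang
%% Hypothetical usage: run a core-EGTM 'set' on global ^Greeting
%% through one of the pool's worker nodes. The operation name and
%% argument shape are assumptions, not the documented EGTM API.
egtm_pool:perform(set, ["^Greeting", [], "Hello GT.M"]).
```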
start_link() -> any()
terminate(Reason, State) -> any()
Generated by EDoc, Aug 19 2012, 02:27:27.