2015-08-03

The Spark Notebook from creator Kate Matsudaira

I received my Spark Notebook. What is the Spark Notebook, you may ask. It is a notebook that combines form and function, and the project raised its funding on Kickstarter.

The way I see it, the Spark Notebook is an agenda (a 6-month agenda) with additional features. These features include (from the guide) the yearly planning pages, accomplishments, the monthly planning pages, the weekly planning pages, the inspiration pages, and the note pages. The monthly planning pages include something called the 30-day challenge. According to creator Kate Matsudaira, the 30-day challenge feature is useful to help start (or break) a habit.

The Spark Notebook comes with a guide. On the guide's website, some of the text is white over a background image, which makes it hard to read. For example, consider the screenshot below.

[Screenshot: white text over a busy background image on the Spark Notebook guide page]

How did I learn about the existence of the Spark Notebook?

One of my hobbies is watching videos on the Amazon Web Services (AWS) YouTube channel. I like these videos because they are usually easy to understand, even though the subject matter may be a bit abstract.

In particular, I watch the videos about AWS products. One example is the video called Introduction to Amazon S3, which explains in 3 minutes the general idea of the Internet storage service called S3 (Simple Storage Service).

The other type of video that I like on the AWS YouTube channel is the one about satisfied AWS customers. The customer experience videos are usually hosted by AWS Chief Evangelist Jeff Barr.

One of the videos I watched featured Kate Matsudaira, CTO of Decide.com. I like the customer experience videos because they usually mention what the customer is doing with AWS (value proposition / business model) and also list the AWS building blocks that the customer uses to make things happen.

So, in that video, I got to know more about Decide.com. Then, I continued my adventure. I searched for Decide.com to try their offering. However, I was not able to do so because Decide.com was acquired in 2013.

Since I was not able to experiment with Decide.com, the next logical step was to look for Kate Matsudaira's next accomplishment. That's when I found popforms.

The company popforms provides "bite-size career development for the modern leader" through online courses. These courses are called sparks. I found this interesting, because I think self-reflection is important for becoming who we want to be.

This company (popforms) was also acquired by a bigger fish (Safari Books Online).

At that point, I was thinking that this entrepreneur probably has a secret sauce, and that perhaps she shared it in the form of a book. This is how I discovered the Spark Notebook.

Also, when I discussed the Spark Notebook concept with my significant other, she said that she has been reading Kate Matsudaira's blog for years. So, Kate Matsudaira is an entrepreneur, technologist, creator, and also a role model.

I think that the Spark Notebook is a great product.

Product: Spark Notebook
Price: US$28.00
Purchase links: Manufacturer or Amazon
Score: 9/10

Pros:
- innovative form, impressive function
- compact size
- self-contained / self-explanatory
- online guide at http://www.thesparknotebook.com/guide
Cons:
- expensive for a 6-month agenda
- only 6 months

2015-07-24

The convergence of HPC and cloud computing

An exascale computer is a hypothetical system that can sustain one exaflops (10^18 floating-point operations per second). Such a machine is needed in science and engineering, mostly for simulating virtual versions of objects found in the real world, such as proteins, planes, and cities. Important requirements for such a computer include 1) memory bandwidth, 2) floating-point operation throughput, and 3) low network latency.

Two of the many challenges standing in the way of having exascale supercomputers by 2020 are 1) improving fault tolerance and 2) lowering energy consumption (see "No Exascale for You!" An Interview with Berkeley Lab's Horst Simon).

One typical way to implement fault tolerance in HPC is the checkpoint/restart cycle, whereas most cloud technologies implement fault tolerance with different principles/abstractions, such as load balancing and replication (see the CAP theorem). Checkpoint/restart cannot work at exascale because, at that scale, there will almost always be a failing component, so an exascale computation needs to survive such failures. In that regard, Facebook is a very large fault-tolerant system that is based on cloud technologies rather than HPC.

Because fault tolerance was figured out a while ago in cloud technologies, the cloud community has been able to move on to other important problems. One active area of development in cloud computing in 2015 has without a doubt been orchestration and provisioning. HPC, meanwhile, is still making progress on the fault-tolerance problem in its own context.

Abstractions


A significant body of research output is coming endlessly from UC Berkeley's AMPLab, other research groups, and Internet companies (Google, Facebook, Amazon, Microsoft, and others). The "cloud stack" (see all the Apache projects, like Spark, Mesos, ZooKeeper, and Cassandra) covers a significant part of today's market needs (datacenter abstraction, distributed databases, map-reduce at scale). What I mean here is that anyone can get started very quickly with these off-the-shelf components, typically using high-level abstractions (such as Spark's Resilient Distributed Datasets, or RDDs). Further, in addition to being available off the shelf, these building blocks can be deployed very easily in various cloud environments, whereas this is rarely the case in HPC.

One observation is that HPC always wants the highest processing speed, usually on bare metal. This low level of abstraction comes with one convenience: things are built on a very small number of abstractions (typically MPI libraries and job schedulers).

On the other hand, abstractions abound in the cloud world, and things are evolving much faster in the cloud than in HPC (see "HPC is dying, and MPI is killing it").



But... I need a fast network for my HPC workflow

One thing that is typically associated with HPC, and not with the cloud, is a very fast network. But this gap is closing, and the cloud is catching up in that regard. Recently, Microsoft added RDMA to Windows Azure. The cloud now technically offers low latency (in microseconds) and high bandwidth (40 Gbps). This is no longer an exclusive feature of HPC.

The network is the computer

In the end, as Sun Microsystems's John Gage said, "The Network is the Computer." The HPC stack is already converging toward what is found in the web/cloud/big data stack (see this piece). There are significant advances in cloud networking too (such as software-defined networks, convenient/automated network provisioning, and performance improvements). So, the prediction that can perhaps be made today is that HPC and cloud will no longer be two solitudes in a not-so-distant future. HPC will benefit from the cloud and vice versa.

What the future holds in this ongoing convergence will be very exciting.


References
-----------------

Daniel A. Reed, Jack Dongarra
Exascale Computing and Big Data
Communications of the ACM, Vol. 58 No. 7, Pages 56-68, 10.1145/2699414
http://cacm.acm.org/magazines/2015/7/188732-exascale-computing-and-big-data/fulltext
This survey paper is very comprehensive and highlights how HPC (here called exascale computing, even though there is no operational exascale computer as of today) and cloud can meet at the crossroads.

Tiffany Trader
Fastest Supercomputer Runs Ubuntu, OpenStack
HPCwire  May 27, 2014
http://www.hpcwire.com/2014/05/27/fastest-supercomputer-runs-ubuntu-openstack/

This article reports on a very large supercomputer that is running OpenStack instead of the classic HPC schedulers (like MOAB, SGE, Cobalt, Maui).

Jonathan Dursi
HPC is dying, and MPI is killing it
R&D computing at scale, 2015-04-03
http://www.dursi.ca/hpc-is-dying-and-mpi-is-killing-it/

This piece is a provocative, yet realistic, depiction of the current popularity of various HPC and cloud technologies (surveyed using Google Trends).

2014-10-17

Profiling the Thorium actor model engine with LTTng UST

Thorium is an actor model engine written in C (C99). It uses MPI and Pthreads.

The latency (in Thorium) when sending small messages between actors recently came to my attention.

In this post, LTTng-UST is used to generate actor message delivery paths annotated with time deltas in each step.

Perf


I have been working with perf for a while now, but I have found it useful mostly for hardware counters.

I typically use the following command to record events with perf ($thread is the Linux LWP (lightweight process) thread number):

perf record -g \
    -e cache-references,cache-misses,cpu-cycles,ref-cycles,instructions,branch-instructions,branch-misses \
    -t $thread -o $thread.perf.data

 

As far as I know, perf cannot trace things like message delivery paths in userspace.

Tracing with LTTng-UST

 
This week, I started to read about tracepoints (perf does support "Tracepoint Events"). In particular, I wanted to use tracepoints to understand some erratic behaviors in Thorium.

LTTng-UST, the Linux Trace Toolkit Next Generation Userspace Tracer, has quite a long name.

A friend of mine (Francis Giraldeau) is a researcher in the field of tracing. He helped me get started with LTTng-UST. I also got some help from the lttng-dev mailing list.

The data model for defining and storing tracepoint events (called CTF, or Common Trace Format) is probably the big difference between LTTng and the other tracers. The LWN articles about LTTng (part 1 and part 2) are very interesting too, and they discuss the CTF format.

According to the LTTng documentation (which is great, by the way), tracing is done in 3 easy steps:
  1. Instrument (add tracepoints in the source code);
  2. Trace (run the application while recording tracepoint events);
  3. Investigate (analyze tracepoint data using various techniques).

In the BIOSAL project, we already have (primitive) tracepoints, but the data gathered is basically analyzed (printed or discarded) in real time, which is probably a bad idea in the first place.

Regardless of where the tracepoint data go, we use the same semantics as LTTng-UST (LTTng was the inspiration). We insert a tracepoint in our code with something like this:

    /* trace message:actor_send events */
    thorium_tracepoint(message, actor_send, message);


Here is a little explanation: message (the first argument) is the provider, actor_send is the event name, and message (the third argument) is the data that we want to submit for tracing.


Adding my first tracepoint


To try LTTng, I added a define (#define thorium_tracepoint tracepoint) to use LTTng's tracepoint().

I also added a tracepoint in thorium_actor_send (actor.c) with the line below.

tracepoint(hello_world, my_first_tracepoint, name * name, "x^2");


I also added a rule for engine/thorium/tracepoints/lttng/hello-tp.o in the Makefile (option: THORIUM_USE_LTTNG).
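For context, a hello_world:my_first_tracepoint event like this one is declared in a tracepoint provider header (here called hello-tp.h). The sketch below follows the standard LTTng-UST provider layout; the argument names are invented for this post, and the field names (my_integer_field, my_string_field) are inferred from the babeltrace output shown later, not copied from the BIOSAL sources. It is a definition fragment that only compiles against lttng-ust:

```c
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER hello_world

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./hello-tp.h"

#if !defined(_HELLO_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define _HELLO_TP_H

#include <lttng/tracepoint.h>

/* Provider "hello_world", event "my_first_tracepoint", with one string
 * field and one integer field recorded in the CTF trace. */
TRACEPOINT_EVENT(
    hello_world,
    my_first_tracepoint,
    TP_ARGS(int, my_integer_arg, char *, my_string_arg),
    TP_FIELDS(
        ctf_string(my_string_field, my_string_arg)
        ctf_integer(int, my_integer_field, my_integer_arg)
    )
)

#endif /* _HELLO_TP_H */

#include <lttng/tracepoint-event.h>
```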

I then started the Spate application, and ran lttng list.

[boisvert@bigmem biosal]$ lttng list --userspace
Spawning a session daemon
UST events:
-------------
None





I ran it again since the LTTng daemon was not running.




[boisvert@bigmem biosal]$ lttng list --userspace
UST events:
-------------

PID: 23298 - Name: ./applications/spate_metagenome_assembler/spate
      ust_baddr_statedump:soinfo (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
      hello_world:my_first_tracepoint (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)

PID: 23300 - Name: ./applications/spate_metagenome_assembler/spate
      ust_baddr_statedump:soinfo (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
      hello_world:my_first_tracepoint (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)

PID: 23299 - Name: ./applications/spate_metagenome_assembler/spate
      ust_baddr_statedump:soinfo (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
      hello_world:my_first_tracepoint (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)

PID: 23297 - Name: ./applications/spate_metagenome_assembler/spate
      ust_baddr_statedump:soinfo (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
      hello_world:my_first_tracepoint (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)



Great. This is very simple to use.


Tracing inside a LTTng session


Now, I am ready to do actual tracing (with the tracepoint "hello_world:my_first_tracepoint"). The first thing to do is to create a session.

[boisvert@bigmem biosal]$ lttng create my-userspace-session
Session my-userspace-session created.
Traces will be written in /home/boisvert/lttng-traces/my-userspace-session-20141017-154714


Then, I enable all userspace tracepoints for my session and start tracing.


[boisvert@bigmem biosal]$ lttng enable-event --userspace --all
All UST events are enabled in channel channel0


[boisvert@bigmem biosal]$ lttng start
Tracing started for session my-userspace-session


Then I run my application.

[boisvert@bigmem biosal]$ mpiexec -n 4 ./applications/spate_metagenome_assembler/spate -k 51 -threads-per-node 8 ~/dropbox/S.aureus.fasta.gz


Once my application terminated, I stopped tracing and destroyed my session.

[boisvert@bigmem biosal]$ lttng stop
Waiting for data availability.
Tracing stopped for session my-userspace-session
[boisvert@bigmem biosal]$
[boisvert@bigmem biosal]$ lttng destroy
Session my-userspace-session destroyed


Tracepoint data files were written in a directory in my home.

[boisvert@bigmem biosal]$ ls ~/lttng-traces/
my-userspace-session-20141017-154714



There are now 135 tracepoint files (all related to "channel0" for this single LTTng tracepoint session).

[boisvert@bigmem biosal]$ find ~/lttng-traces/my-userspace-session-20141017-154714/|wc -l
135


So far, I did step 1) Instrument and step 2) Trace. The third and last step is 3) Investigate.

One easy way to inspect LTTng traces is a tool named babeltrace. By default, this tool dumps the events as ASCII text on standard output.

[boisvert@bigmem ~]$ babeltrace ~/lttng-traces/my-userspace-session-20141017-154714/ | head
[15:50:56.347344203] (+?.?????????) bigmem hello_world:my_first_tracepoint: { cpu_id = 38 }, { my_string_field = "x^2", my_integer_field = -689379607 }
[15:50:56.347520299] (+0.000176096) bigmem hello_world:my_first_tracepoint: { cpu_id = 39 }, { my_string_field = "x^2", my_integer_field = -695379712 }
[15:50:56.347582196] (+0.000061897) bigmem hello_world:my_first_tracepoint: { cpu_id = 38 }, { my_string_field = "x^2", my_integer_field = -691379644 }
[15:50:56.347698898] (+0.000116702) bigmem hello_world:my_first_tracepoint: { cpu_id = 36 }, { my_string_field = "x^2", my_integer_field = -695379712 }
[15:50:56.347765853] (+0.000066955) bigmem hello_world:my_first_tracepoint: { cpu_id = 38 }, { my_string_field = "x^2", my_integer_field = -693379679 }
[15:50:56.348014813] (+0.000248960) bigmem hello_world:my_first_tracepoint: { cpu_id = 7 }, { my_string_field = "x^2", my_integer_field = -695379712 }
[15:50:56.348053162] (+0.000038349) bigmem hello_world:my_first_tracepoint: { cpu_id = 38 }, { my_string_field = "x^2", my_integer_field = -683379484 }
[15:50:56.348172916] (+0.000119754) bigmem hello_world:my_first_tracepoint: { cpu_id = 46 }, { my_string_field = "x^2", my_integer_field = -695379712 }
[15:50:56.348228416] (+0.000055500) bigmem hello_world:my_first_tracepoint: { cpu_id = 38 }, { my_string_field = "x^2", my_integer_field = -683379484 }
[15:50:56.348305385] (+0.000076969) bigmem hello_world:my_first_tracepoint: { cpu_id = 46 }, { my_string_field = "x^2", my_integer_field = -695379712 }






Let's create useful tracepoints now.


Concrete tracepoints in Thorium



Below is a list of tracepoints in the delivery path of any message in Thorium.

- thorium_message:actor_send
- thorium_message:worker_send
- thorium_message:worker_send_enqueue
- thorium_message:node_send
- thorium_message:node_send_system
- thorium_message:node_send_dispatch
- thorium_message:node_dispatch_message
- thorium_message:worker_pool_enqueue
- thorium_message:transport_send
- thorium_message:transport_receive
- thorium_message:node_receive
- thorium_message:worker_receive
- thorium_message:actor_receive


Here is a delivery trace for a message delivered within a node (note the suspiciously large time delta on the worker_dequeue_message line).

[boisvert at bigmem biosal]$ babeltrace  ~/lttng-traces/auto-20141018-104939/ |grep "message_number = 8000," | awk '{print $4" "$2}'|sed 's/: / /g'|sed 's/(//g'|sed 's/)//g'
thorium_message:actor_send +0.000010835
thorium_message:worker_send +0.000000876
thorium_message:worker_enqueue_message +0.000001410
thorium_message:worker_dequeue_message +0.003886256  
thorium_message:worker_pool_dequeue +0.000001434
thorium_message:node_send +0.000001438
thorium_message:node_send_system +0.000001006
thorium_message:node_dispatch_message +0.000001231
thorium_message:worker_pool_enqueue +0.000001827
thorium_message:node_send_dispatch +0.000001100
thorium_message:worker_receive +0.000003944
thorium_message:actor_receive +0.000003028

A message sent between nodes also shows this high delta (4 ms).

[boisvert at bigmem biosal]$ babeltrace  ~/lttng-traces/auto-20141018-104939/ |grep "message_number = 8800," | awk '{print $4" "$2}'|sed 's/: / /g'|sed 's/(//g'|sed 's/)//g'
thorium_message:actor_send +0.004115537
thorium_message:worker_send +0.000001402
thorium_message:worker_enqueue_message +0.000001457
thorium_message:worker_dequeue_message +0.004062639  
thorium_message:worker_pool_dequeue +0.000001536
thorium_message:node_send +0.000000934
thorium_message:node_send_system +0.000001185
thorium_message:transport_send +0.000001039
thorium_message:node_receive +0.000061961
thorium_message:node_dispatch_message +0.000001502
thorium_message:worker_pool_enqueue +0.000002299
thorium_message:worker_receive +0.000001632
thorium_message:actor_receive +0.000003461


Conclusion


According to the LTTng traces, the way messages are dequeued in Thorium is not very efficient.


Edit 2014-10-18: added actual tracepoints
Edit 2014-10-18: fixed format

2014-08-28

Profiling a high-performance actor application for metagenomics


I am currently in an improvement phase where I break, build and improve various components of the system.


The usual way of doing things is to take a static view of one node among all the nodes in an actor computation. The graphs look like this:


[Graphs: single-node profile for the 512x16, 1024x16, 1536x16, and 2048x16 configurations]

But with 2048 nodes, one single selected node may not be an accurate representation of what is going on. This is why, using Thorium profiles, we are generating 3D graphs instead. They look like this:



[3D graphs: all-node profile for the 512x16, 1024x16, 1536x16, and 2048x16 configurations]


2014-08-02

The public datasets from the DOE/JGI Great Prairie Soil Metagenome Grand Challenge



I am working on a couple of very large public metagenomics datasets from the Department of Energy (DOE) Joint Genome Institute (JGI). These datasets were produced in the context of the Grand Challenge program.

Professor Janet Jansson was the Principal Investigator for the proposal named Great Prairie Soil Metagenome Grand Challenge ( Proposal ID: 949 ).


Professor C. Titus Brown wrote a blog article about this Grand Challenge.
Moreover, the Brown research group published at least one paper using these Grand Challenge datasets (assembly with digital normalization and partitioning).

Professor James Tiedje presented the Grand Challenge at the 2012 Metagenomics Workshop.

Alex Copeland presented interesting work at Sequencing, Finishing and Analysis in the Future (SFAF) in 2012 related to this Grand Challenge.



Jansson's Grand Challenge included 12 projects. Below is the list I made of them.

  1. Great Prairie Soil Metagenome Grand Challenge: Kansas, Cultivated corn soil metagenome reference core (402463)
  2. Great Prairie Soil Metagenome Grand Challenge: Kansas, Native Prairie metagenome reference core (402464)
  3. Great Prairie Soil Metagenome Grand Challenge: Kansas, Native Prairie metagenome reference core (402464) (I don't know why it's listed twice)
  4. Great Prairie Soil Metagenome Grand Challenge: Kansas soil pyrotag survey (402466)
  5. Great Prairie Soil Metagenome Grand Challenge: Iowa, Continuous corn soil metagenome reference core (402461)
  6. Great Prairie Soil Metagenome Grand Challenge: Iowa, Native Prairie soil metagenome reference core (402462)
  7. Great Prairie Soil Metagenome Grand Challenge: Iowa soil pyrotag survey (402465)
  8. Great Prairie Soil Metagenome Grand Challenge: Wisconsin, Continuous corn soil metagenome reference core (402460)
  9. Great Prairie Soil Metagenome Grand Challenge: Wisconsin, Native Prairie soil metagenome reference core (402459)
  10. Great Prairie Soil Metagenome Grand Challenge: Wisconsin, Restored Prairie soil metagenome reference core (402457)
  11. Great Prairie Soil Metagenome Grand Challenge: Wisconsin, Switchgrass soil metagenome reference core (402458)
  12. Great Prairie Soil Metagenome Grand Challenge: Wisconsin soil pyrotag survey (402456)

I thank the Jansson research group for making these datasets public so that I don't have to look further for large politics-free metagenomics datasets.


Table 1: number of files, reads, and bases in the Grand Challenge datasets. Most of the sequences are paired reads.

Dataset                                    File count      Read count          Base count
Iowa_Continuous_Corn_Soil (details)                25   2 055 601 258     196 708 830 076
Iowa_Native_Prairie_Soil (details)                 25   3 750 844 486     326 986 888 235
Kansas_Cultivated_Corn_Soil (details)              30   2 677 222 281     272 276 185 410
Kansas_Native_Prairie_Soil (details)               33   5 126 775 452     597 933 511 278
Wisconsin_Continuous_Corn_Soil (details)           18   1 912 865 700     192 128 891 088
Wisconsin_Native_Prairie_Soil (details)            20   2 098 317 886     211 016 377 208
Wisconsin_Restored_Prairie_Soil (details)           6     347 778 670      52 514 579 170
Wisconsin_Switchgrass_Soil (details)                7     448 382 766      58 323 428 574
Total                                             164  18 417 788 499   1 907 888 691 039


At Argonne, we are using these datasets to develop a next-generation metagenomics assembler named "Spate", built on top of the Thorium actor engine. The word spate means a large number of similar things or events appearing or occurring in quick succession. With the actor model, every single message is an active message. Active messages are very neat, and there are a lot of them with the actor model.



2014-08-01

The Thorium actor engine is operational now, we can start to work on actor applications for metagenomics

I have been very busy during the last months. In particular, I completed my doctorate on April 10th, 2014 and we moved from Canada to the United States on April 15th, 2014. I started a new occupation on April 21st, 2014 at Argonne National Laboratory (a U.S. Department of Energy laboratory).

But the biggest change, perhaps, was not one listed in the enumeration above. The biggest change was to stop working on Ray. Ray is built on top of RayPlatform, which in turn uses MPI for the parallelism and distribution. But this approach is not an easy way of devising applications because message passing alone is a very leaky, not self-contained, abstraction. Ray usually works fine, but it has some bugs.

The problem with leaky abstractions is that they lack simplicity and are way too complex to scale out.

For example, it is hard to add new code to an existing code base without breaking anything. This is the case because MPI only offers a fixed number of ranks. Sure, the MPI standard has some features to spawn ranks, but they are not supported on most platforms, and when they are, ranks are spawned as operating system processes.

There are arguably 3 known methods to reduce the number of bugs. The first is to (1) write a lot of tests, although it's better to have fewer bugs in the first place. The second is to use pure (2) functional programming. The third is to use the (3) actor model.

If you look at what the industry is doing, Erlang, Scala (and perhaps D) use the actor model of computation. The actor model of computation was introduced by the legendary (that's my opinion) Carl Hewitt in two seminal papers (Hewitt, Bishop, Steiger 1973 and Hewitt and Baker 1977).

Erlang is cooler than Scala (this is an opinion, not a fact) because it enforces both the actor model and functional programming whereas Scala (arguably) does not enforce anything.

The real thing, perhaps, is to apply the actor model to high-performance computing. In particular, I am applying it to metagenomics because there is a lot of data. For example, Janet Jansson and her team generated huge datasets in 2011 in the context of a Grand Challenge.

So basically I started to work on biosal (biological sequence analysis library) on May 22nd, 2014. The initial momentum for the SAL concept (Sequence Analysis Library) was created in 2012 at a workshop. So far, at least two projects (that I am aware of) are related to this workshop: KMI (Kmer Matching Interface) and biosal.

The biosal team is small: we are currently 6 people, and only 2 of us are pushing code.

Here is the current team:

Person (alphabetical order) and their roles in the biosal project:
Pavan Balaji
  • MPI consultant
Sébastien Boisvert
  • Master branch owner
  • Actor model enthusiast
  • Metagenomics person
  • Scrum master
Huy Bui
  • PAMI consultant
  • Communication consultant
Rick Stevens
  • Supervisor
  • Metagenomics person
  • Stakeholder
  • Product owner
  • Exascale computing enthusiast
Venkatram Vishwanath
  • Actor model enthusiast
  • Exascale computing enthusiast
Fangfang Xia
  • Product manager
  • Actor model enthusiast
  • Metagenomics person





When I started to implement the runtime system in biosal, I did not plan to give a name to that component. But I changed my mind because the code is general and very cool. It is a distributed actor engine written in C (C99), MPI 1.0, and Pthreads, and it's named Thorium (like the chemical element).

Thorium uses the actor model, but does not use functional programming.

It is quite easy to get started: it is a two-step process.

The first step is to create an actor script (3 C functions called init, destroy, and receive). For a given actor script, you need to write 2 files (a .h header file and a .c implementation file).

The first step defines an actor script structure like this:

struct bsal_script hello_script = {
    .name = HELLO_SCRIPT,
    .init = hello_init,
    .destroy = hello_destroy,
    .receive = hello_receive,
    .size = sizeof(struct hello),
    .description = "hello"
};
The prototypes for the 3 functions are:

Function   Concrete actor function
init       void hello_init(struct bsal_actor *self);
destroy    void hello_destroy(struct bsal_actor *self);
receive    void hello_receive(struct bsal_actor *self,
               struct bsal_message *message);


The functions init and destroy are called automatically by Thorium when an actor is spawned and killed, respectively. The function receive is called automatically by Thorium when the actor receives a message. Sending messages is the only way to interact with an actor.

There is only one (very simple) way to send a message to an actor:

void bsal_actor_send(struct bsal_actor *self, int destination, struct bsal_message *message);


The second step is to create a Thorium runtime node in a C file with a main function (around 10 lines).

After creating the code in two easy steps, you just need to compile and link the code.

After that, you can perform actor computations anywhere. A typical command to do so is:

mpiexec -n 1024 ./hello_world -threads-per-node 32
 
 

Obviously, you need more than just one actor script to actually do something cool with actors.


On a final note, biosal is an object-oriented project. The current object is typically called self, like in Swift, Ruby, and Smalltalk.

2014-07-21

Is it required to use different priorities in a high-performance actor system?



I was reading a log file from an actor computation. In particular, I was looking at the outcome of a kmer counting computation performed with Argonnite, which runs on top of Thorium. Argonnite is an application in the BIOSAL project and Thorium is the engine of the BIOSAL project (which means that all BIOSAL applications run on top of Thorium).




In BIOSAL, everything is an actor or a message, and these are handled by the Thorium engine. Thorium is a distributed engine: a computation with Thorium is distributed across BIOSAL runtime nodes. Each node has 1 pacing thread and a set of worker threads (for example, with 32 threads, you get 1 pacing thread and 31 workers).




Each worker is responsible for a subset of the actors that live inside a given BIOSAL node. Obviously, you want each worker to have its own actors to keep every worker busy. Each worker has a scheduling queue with 4 priorities: max, high, normal, and low (these are the priorities used by BEAM, the Erlang runtime system). An actor with max priority always wins. Otherwise, high, normal, and low are served in a ratio of N*N*N to N*N to N. This ratio protects against starvation.

In the current code, every actor is classified in normal by default.

If I put every actor in the same priority (BSAL_PRIORITY_NORMAL), I see this (for node 0 and worker 5) when I run an actor computation on one single physical machine (no network latency when passing messages around):

node/0 worker/5 SchedulingQueue Levels: 4
node/0 worker/5 scheduling_queue: Priority Queue 1048576 (BSAL_PRIORITY_MAX), actors: 0
node/0 worker/5 scheduling_queue: Priority Queue 128 (BSAL_PRIORITY_HIGH), actors: 0
node/0 worker/5 scheduling_queue: Priority Queue 64 (BSAL_PRIORITY_NORMAL), actors: 4
node/0 worker/5 [0] actor aggregator/1291935834 (1 messages)
node/0 worker/5 [1] actor kmer_store/1477943366 (511 messages)
node/0 worker/5 [2] actor aggregator/443747990 (1 messages)
node/0 worker/5 [3] actor aggregator/710261816 (1 messages)
node/0 worker/5 scheduling_queue: Priority Queue 4 (BSAL_PRIORITY_LOW), actors: 0
node/0 worker/5 SchedulingQueue... completed report !


Arguably, actor 1477943366 should be classified at a higher priority than NORMAL (such as HIGH or MAX). But is it required? I think, at least in this case, that the answer is no. Here is the reason.


The only thing that counts at the end of the day is that you do not want to waste CPU cycles. As long as CPU cycles are not wasted (this is called efficiency), the order of events (a partial order, really) is unimportant, provided that no worker starves (remember, wasting CPU cycles is like wasting money: it's bad).

Below are the load values across the actor system for an actor computation that lasted 17 minutes with an efficiency of 94% (the computation wasted around 6% of the CPU cycles, which is not bad, but not perfect either).

At the beginning, there is some I/O, which waits for the magnetic disk. So CPU cycles are wasted.

Then, actors exchange messages at full capacity, which shows up as a load between 0.99 and 1.00.

Then at the end the load drops a little because of work scarcity.

The first step is the data counting. In this computation, there was only one input file, so only one worker is busy from 0 seconds to 45 seconds.

LOAD EPOCH 0 s node/0 0.00/31 (0.00) 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
LOAD EPOCH 5 s node/0 0.05/31 (0.00) 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
LOAD EPOCH 10 s node/0 0.96/31 (0.03) 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.96 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
LOAD EPOCH 15 s node/0 0.98/31 (0.03) 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.98 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
LOAD EPOCH 20 s node/0 0.98/31 (0.03) 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.98 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
LOAD EPOCH 25 s node/0 0.98/31 (0.03) 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.98 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
LOAD EPOCH 30 s node/0 0.98/31 (0.03) 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.98 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
LOAD EPOCH 35 s node/0 0.98/31 (0.03) 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.98 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
LOAD EPOCH 40 s node/0 0.98/31 (0.03) 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.98 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
LOAD EPOCH 45 s node/0 3.79/31 (0.12) 0.00 0.00 0.00 0.00 0.22 0.22 0.22 0.23 0.24 0.22 0.23 0.00 0.00 0.22 0.98 0.00 0.00 0.00 0.00 0.00 0.26 0.00 0.26 0.27 0.00 0.00 0.00 0.00 0.00 0.22 0.00


Data distribution from input_stream actors to the sequence_store actors happens between 50 seconds and 60 seconds.

LOAD EPOCH 50 s node/0 11.04/31 (0.36) 0.73 0.74 0.77 0.21 0.22 0.22 0.22 0.23 0.24 0.22 0.23 0.74 0.21 0.73 0.73 0.27 0.26 0.26 0.27 0.26 0.26 0.27 0.26 0.27 0.26 0.26 0.26 0.27 0.71 0.22 0.22
LOAD EPOCH 55 s node/0 24.96/31 (0.81) 0.73 0.74 0.77 0.73 0.73 0.75 0.74 0.75 0.76 0.73 0.75 0.72 0.67 0.73 0.73 0.93 0.91 0.92 0.91 0.91 0.91 0.92 0.92 0.93 0.91 0.91 0.90 0.76 0.71 0.74 0.74
LOAD EPOCH 60 s node/0 25.65/31 (0.83) 0.71 0.71 0.72 0.71 0.72 0.72 0.71 1.00 0.72 0.72 0.73 0.72 0.71 0.72 0.72 1.00 0.96 0.98 0.98 0.98 0.96 0.98 0.98 1.00 0.95 0.99 0.96 0.75 0.71 0.72 0.72


From 65 seconds to 950 seconds (most of the computation), the load of every worker reported by the Thorium runtime system is 99% or 100%. This is good enough.

LOAD EPOCH 65 s node/0 30.87/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.99 1.00 1.00
LOAD EPOCH 70 s node/0 30.90/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.99 0.99 1.00 1.00 1.00 0.99 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 75 s node/0 30.84/31 (0.99) 0.99 1.00 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 1.00 0.99 0.99 1.00 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 80 s node/0 30.91/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 85 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 90 s node/0 30.84/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 95 s node/0 30.85/31 (1.00) 0.99 1.00 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 1.00 0.99 0.99 1.00 0.99 1.00 1.00 1.00 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 100 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 105 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 110 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 115 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.99 1.00 1.00 0.99 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 120 s node/0 30.86/31 (1.00) 1.00 1.00 1.00 1.00 0.99 1.00 0.99 1.00 0.99 1.00 1.00 0.99 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.99 1.00 1.00 1.00 1.00 1.00 0.99 1.00 0.99 1.00 1.00
LOAD EPOCH 125 s node/0 30.81/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 130 s node/0 30.83/31 (0.99) 0.99 0.99 1.00 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 135 s node/0 30.86/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.99 1.00 1.00
LOAD EPOCH 140 s node/0 30.91/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 145 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 150 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 155 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 160 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 165 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 170 s node/0 30.94/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 175 s node/0 30.94/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 180 s node/0 30.87/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.99 1.00 1.00 1.00 1.00 1.00 0.99 0.95 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.99 0.99 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 185 s node/0 30.77/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.95 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 190 s node/0 30.80/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99
LOAD EPOCH 195 s node/0 30.81/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 200 s node/0 30.82/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 205 s node/0 30.83/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 210 s node/0 30.84/31 (0.99) 0.99 1.00 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 215 s node/0 30.86/31 (1.00) 1.00 1.00 0.99 1.00 0.99 1.00 0.99 1.00 1.00 1.00 1.00 0.99 1.00 1.00 0.99 1.00 0.99 1.00 1.00 0.99 0.99 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.99 1.00 0.99
LOAD EPOCH 220 s node/0 30.88/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 225 s node/0 30.91/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 230 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 235 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 240 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 245 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 250 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 255 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 260 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 265 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 270 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 275 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 280 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 285 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 290 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 295 s node/0 30.94/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 300 s node/0 30.94/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 305 s node/0 30.94/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 310 s node/0 30.61/31 (0.99) 1.00 0.88 0.99 0.99 0.94 1.00 1.00 1.00 0.98 1.00 0.99 1.00 1.00 1.00 1.00 0.99 1.00 0.99 1.00 1.00 0.99 0.99 0.99 1.00 0.99 1.00 0.93 1.00 1.00 1.00 1.00
LOAD EPOCH 315 s node/0 30.76/31 (0.99) 0.99 0.99 0.99 0.99 0.98 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.98 1.00 0.99 0.99 0.99
LOAD EPOCH 320 s node/0 30.79/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 325 s node/0 30.80/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 330 s node/0 30.80/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 335 s node/0 30.80/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 340 s node/0 30.81/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 345 s node/0 30.81/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 350 s node/0 30.82/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 355 s node/0 30.83/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 360 s node/0 30.81/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 0.97 0.99 0.99 0.99 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 365 s node/0 30.81/31 (0.99) 0.99 1.00 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 0.98 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.98 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 370 s node/0 30.80/31 (0.99) 0.99 1.00 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.98 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.98 0.99 0.99 0.99
LOAD EPOCH 375 s node/0 30.80/31 (0.99) 0.99 1.00 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 1.00 0.99 0.99 1.00 0.99 1.00 0.99 1.00 0.98 0.99 0.99 1.00 0.99 0.99 1.00 1.00 0.99 0.98 0.99 0.99 0.99
LOAD EPOCH 380 s node/0 30.83/31 (0.99) 1.00 1.00 1.00 1.00 1.00 0.99 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.99 0.99 0.99 1.00 1.00 0.99 1.00 1.00 1.00 0.99 0.99 1.00 1.00
LOAD EPOCH 385 s node/0 30.86/31 (1.00) 1.00 1.00 1.00 1.00 1.00 0.99 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.99 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 390 s node/0 30.88/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 395 s node/0 30.89/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 400 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 405 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 410 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 415 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 420 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 425 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 430 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 435 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 440 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 445 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 450 s node/0 30.89/31 (1.00) 1.00 1.00 1.00 1.00 1.00 0.98 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.98 1.00 1.00 1.00
LOAD EPOCH 455 s node/0 30.86/31 (1.00) 1.00 1.00 1.00 1.00 1.00 0.97 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.98 1.00 1.00 1.00 0.97 1.00 1.00 1.00
LOAD EPOCH 460 s node/0 30.86/31 (1.00) 1.00 1.00 1.00 1.00 1.00 0.98 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.98 1.00 1.00 1.00 0.97 1.00 1.00 1.00
LOAD EPOCH 465 s node/0 30.85/31 (1.00) 1.00 1.00 1.00 1.00 1.00 0.97 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.98 1.00 1.00 1.00 0.97 1.00 1.00 1.00
LOAD EPOCH 470 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 475 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 480 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 485 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 490 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 495 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 500 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 505 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 510 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 515 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 520 s node/0 30.94/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 525 s node/0 30.94/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 530 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 535 s node/0 30.94/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 540 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 545 s node/0 30.94/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 550 s node/0 30.94/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 555 s node/0 30.94/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 560 s node/0 30.40/31 (0.98) 1.00 1.00 1.00 1.00 1.00 0.92 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.55 1.00 1.00 0.98 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 565 s node/0 30.02/31 (0.97) 1.00 1.00 1.00 1.00 1.00 0.92 1.00 0.65 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.55 1.00 1.00 0.98 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.96 1.00 1.00 1.00
LOAD EPOCH 570 s node/0 27.44/31 (0.89) 1.00 0.81 0.98 1.00 0.88 0.56 1.00 0.65 1.00 0.98 0.56 1.00 1.00 1.00 1.00 0.56 0.99 1.00 0.56 1.00 0.99 0.81 0.86 0.99 0.85 1.00 0.91 0.56 1.00 1.00 1.00
LOAD EPOCH 575 s node/0 30.71/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.98 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.98 0.99 0.99 0.99 0.99
LOAD EPOCH 580 s node/0 30.79/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99
LOAD EPOCH 585 s node/0 30.79/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99
LOAD EPOCH 590 s node/0 30.79/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99
LOAD EPOCH 595 s node/0 30.79/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99
LOAD EPOCH 600 s node/0 30.79/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99
LOAD EPOCH 605 s node/0 30.79/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99
LOAD EPOCH 610 s node/0 30.79/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 615 s node/0 30.79/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 620 s node/0 30.79/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 625 s node/0 30.80/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 630 s node/0 30.80/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 635 s node/0 30.80/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 640 s node/0 30.81/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 645 s node/0 30.81/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 650 s node/0 30.81/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 655 s node/0 30.81/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 660 s node/0 30.81/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 665 s node/0 30.82/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 670 s node/0 30.82/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 675 s node/0 30.82/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 680 s node/0 30.83/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 685 s node/0 30.83/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 690 s node/0 30.83/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 695 s node/0 30.84/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 700 s node/0 30.84/31 (0.99) 0.99 0.99 0.99 0.99 0.99 1.00 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 1.00 0.99 0.99 0.99 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 705 s node/0 30.85/31 (1.00) 0.99 1.00 0.99 0.99 0.99 1.00 0.99 1.00 0.99 1.00 1.00 0.99 0.99 0.99 0.99 1.00 0.99 1.00 1.00 0.99 0.99 1.00 0.99 1.00 0.99 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 710 s node/0 30.85/31 (1.00) 0.99 1.00 0.99 0.99 0.99 1.00 0.99 1.00 0.99 1.00 1.00 0.99 0.99 0.99 0.99 1.00 0.99 1.00 1.00 0.99 0.99 1.00 0.99 1.00 1.00 0.99 0.99 1.00 0.99 0.99 0.99
LOAD EPOCH 715 s node/0 30.85/31 (1.00) 1.00 1.00 0.99 1.00 0.99 1.00 0.99 1.00 0.99 1.00 1.00 1.00 0.99 1.00 0.99 1.00 1.00 1.00 1.00 0.99 0.99 1.00 0.99 1.00 1.00 1.00 1.00 1.00 0.99 1.00 0.99
LOAD EPOCH 720 s node/0 30.86/31 (1.00) 1.00 1.00 1.00 1.00 0.99 1.00 0.99 1.00 1.00 1.00 1.00 0.99 1.00 1.00 1.00 1.00 0.99 1.00 1.00 0.99 0.99 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.99 1.00 1.00
LOAD EPOCH 725 s node/0 30.86/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 0.99 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.99 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.99 1.00 1.00
LOAD EPOCH 730 s node/0 30.87/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 0.99 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 735 s node/0 30.87/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 740 s node/0 30.88/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 745 s node/0 30.88/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 750 s node/0 30.89/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 755 s node/0 30.89/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 760 s node/0 30.91/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 765 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 770 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 775 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 780 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 785 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 790 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 795 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 800 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 805 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 810 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 815 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 820 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 825 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 830 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 835 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 840 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 845 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 850 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 855 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 860 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 865 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 870 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 875 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 880 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 885 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 890 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 895 s node/0 30.92/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 900 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 905 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 910 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 915 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 920 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 925 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 930 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 935 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 940 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 945 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
LOAD EPOCH 950 s node/0 30.93/31 (1.00) 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00


Starting at 955 seconds, the final phase applies a variable load across the system.


LOAD EPOCH 955 s node/0 28.11/31 (0.91) 1.00 0.54 0.99 1.00 0.82 1.00 1.00 1.00 0.81 0.88 1.00 0.84 1.00 1.00 1.00 1.00 0.90 0.81 1.00 1.00 1.00 0.71 0.77 1.00 0.75 0.92 0.72 1.00 1.00 0.96 0.72
LOAD EPOCH 960 s node/0 4.86/31 (0.16) 0.22 0.00 0.00 0.12 0.00 0.14 0.99 0.21 0.00 0.00 0.30 0.00 0.22 0.07 0.38 1.00 0.00 0.00 0.17 0.29 0.02 0.00 0.00 0.18 0.00 0.00 0.00 0.21 0.35 0.00 0.00
LOAD EPOCH 965 s node/0 4.86/31 (0.16) 0.22 0.00 0.00 0.12 0.00 0.14 0.99 0.21 0.00 0.00 0.30 0.00 0.22 0.07 0.38 1.00 0.00 0.00 0.17 0.29 0.02 0.00 0.00 0.18 0.00 0.00 0.00 0.21 0.35 0.00 0.00
LOAD EPOCH 970 s node/0 4.86/31 (0.16) 0.22 0.00 0.00 0.12 0.00 0.14 0.99 0.21 0.00 0.00 0.30 0.00 0.22 0.07 0.38 1.00 0.00 0.00 0.17 0.29 0.02 0.00 0.00 0.18 0.00 0.00 0.00 0.21 0.35 0.00 0.00
LOAD EPOCH 975 s node/0 4.86/31 (0.16) 0.22 0.00 0.00 0.12 0.00 0.14 0.99 0.21 0.00 0.00 0.30 0.00 0.22 0.07 0.38 1.00 0.00 0.00 0.17 0.29 0.02 0.00 0.00 0.18 0.00 0.00 0.00 0.21 0.35 0.00 0.00
LOAD EPOCH 980 s node/0 4.86/31 (0.16) 0.22 0.00 0.00 0.12 0.00 0.14 0.99 0.21 0.00 0.00 0.30 0.00 0.22 0.07 0.38 1.00 0.00 0.00 0.17 0.29 0.02 0.00 0.00 0.18 0.00 0.00 0.00 0.21 0.35 0.00 0.00
LOAD EPOCH 985 s node/0 4.86/31 (0.16) 0.22 0.00 0.00 0.12 0.00 0.14 0.99 0.21 0.00 0.00 0.30 0.00 0.22 0.07 0.38 1.00 0.00 0.00 0.17 0.29 0.02 0.00 0.00 0.18 0.00 0.00 0.00 0.21 0.35 0.00 0.00
LOAD EPOCH 990 s node/0 4.86/31 (0.16) 0.22 0.00 0.00 0.12 0.00 0.14 0.99 0.21 0.00 0.00 0.30 0.00 0.22 0.07 0.38 1.00 0.00 0.00 0.17 0.29 0.02 0.00 0.00 0.18 0.00 0.00 0.00 0.21 0.35 0.00 0.00
LOAD EPOCH 995 s node/0 26.64/31 (0.86) 0.95 0.95 0.95 0.95 0.95 0.95 0.99 0.21 0.95 0.95 0.95 0.95 0.95 0.95 0.95 1.00 0.95 0.00 0.95 0.29 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.35 0.95 0.95
LOAD EPOCH 1000 s node/0 30.53/31 (0.98) 0.99 0.99 0.99 0.99 0.99 0.99 0.95 0.95 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.95 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.95 0.99 0.99
LOAD EPOCH 1005 s node/0 30.68/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99
LOAD EPOCH 1010 s node/0 30.68/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99
LOAD EPOCH 1015 s node/0 30.68/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99
LOAD EPOCH 1020 s node/0 30.68/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99
LOAD EPOCH 1025 s node/0 30.68/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99
LOAD EPOCH 1030 s node/0 30.68/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99
LOAD EPOCH 1035 s node/0 30.70/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99
LOAD EPOCH 1040 s node/0 30.71/31 (0.99) 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99
LOAD EPOCH 1045 s node/0 28.28/31 (0.91) 0.99 0.99 0.92 0.99 0.99 0.99 0.99 0.95 0.54 0.99 0.99 0.99 0.99 0.93 0.99 0.99 0.91 0.99 0.99 0.99 0.48 0.88 0.99 0.85 0.99 0.99 0.99 0.40 0.99 0.63 0.99
LOAD EPOCH 1050 s node/0 12.11/31 (0.39) 0.43 0.36 0.00 0.30 0.28 0.92 0.18 0.00 0.00 0.99 0.84 0.99 0.55 0.00 0.17 0.85 0.00 0.85 0.72 0.99 0.00 0.00 0.78 0.00 0.74 0.11 0.85 0.00 0.13 0.00 0.09
LOAD LOOP 1052 s node/0 29.20/31 (0.94) 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.98 0.95 0.94 0.94 0.94 0.95 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94


The overall load reported by Thorium appears below.


LOAD LOOP 1052 s node/0 29.20/31 (0.94) 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.98 0.95 0.94 0.94 0.94 0.95 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94
node 0 efficiency: 0.94


This means that 6% of the CPU cycles went into the garbage bin and were not used. Usually, this is caused by unavailable operands. In an actor computation, a worker has unavailable operands when it has 0 actors scheduled in its priority scheduling queue, which means that none of its actors has any message in its inbox.
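To make the efficiency number concrete, here is a small sketch (my own parser, not BIOSAL code; the field positions are assumed from the log lines shown above) that parses one of the Thorium "LOAD" lines and recomputes the node efficiency, which is simply the mean of the 31 per-worker loads.

```python
def parse_load_line(line):
    """Split a Thorium LOAD line into (elapsed seconds, node, per-worker loads).

    Assumed layout: LOAD LOOP 1052 s node/0 29.20/31 (0.94) w0 w1 ... w30
    """
    fields = line.split()
    elapsed_s = int(fields[2])          # elapsed time in seconds
    node = fields[4]                    # e.g. "node/0"
    loads = [float(x) for x in fields[7:]]  # one load value per worker
    return elapsed_s, node, loads

# The final "LOAD LOOP" line from the run above.
line = ("LOAD LOOP 1052 s node/0 29.20/31 (0.94) "
        "0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 "
        "0.98 0.95 0.94 0.94 0.94 0.95 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94 "
        "0.94 0.94 0.94")

elapsed_s, node, loads = parse_load_line(line)
efficiency = sum(loads) / len(loads)    # mean worker load = node efficiency
print(len(loads), round(efficiency, 2)) # 31 workers, efficiency 0.94
```

The recomputed mean matches the 0.94 that Thorium itself reports, and the 29.20/31 field is just the sum of the per-worker loads over the number of workers.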


Obviously, actors are cool. And BIOSAL will bring this coolness to genomics, at scale.