k8s-cluster-simulator: A simulator for evaluating Kubernetes schedulers

Daisuke Taniwaki

2019-04-11 08:00:53


We’re happy to release k8s-cluster-simulator, an open-source Kubernetes cluster simulator. The simulator is currently in alpha, and was created by Hidehito Yabuuchi, a 2018 PFN internship student and part-time employee, together with his mentors, Daisuke Taniwaki and Shingo Omura. It simulates the workloads and the clock of a Kubernetes cluster so that you can evaluate your Kubernetes scheduler without deploying it to a production environment.


We have large on-premise GPU clusters in which researchers run ML jobs of various durations via Kubernetes. One of our goals is to maximize GPU utilization for cost-effectiveness while giving all researchers reasonable access. To this end, we developed our own Kubernetes scheduler and extender (e.g. kube-throttler). However, it is hard to evaluate new scheduling logic in production: researchers are constantly running jobs, and we should not change the scheduling logic and fairness too often. We obviously cannot deploy a buggy scheduler that would interrupt the researchers’ work, and stopping research just to test new scheduling logic on a large cluster is not desirable either. Therefore, we started to develop a scheduler simulator for Kubernetes.


We believe the simulator should have the following properties.

  • Require as few changes to the scheduler’s implementation and interface as possible.
  • Simulate the clock, both to accelerate evaluations and to evaluate scheduling logic without being affected by system latencies such as network and internal processing.
  • Simulate workloads as flexibly as possible.
  • Support various output formats for further analysis.


Here’s the simple flow diagram.

The idea is simple. The simulator maintains a simulated clock and ticks it at each step of the loop. At each step, it asks the submitters whether they have pods to be submitted or deleted at this clock, and passes the submitted pods to the scheduler. The scheduler returns bind and delete events so that the simulator can simulate resource management. Finally, the simulator writes the metrics of the simulation through metrics loggers.
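To make the loop concrete, here is a rough conceptual sketch in Python. All names here (the submitter, scheduler, cluster, and logger objects and their methods) are hypothetical stand-ins for illustration; they are not the simulator’s actual interfaces.

from datetime import datetime, timedelta

def run_simulation(submitters, scheduler, cluster, metrics_loggers,
                   start=datetime(2019, 1, 1), tick=timedelta(seconds=10), steps=1000):
    clock = start
    for _ in range(steps):
        # Ask every submitter for pods to submit or delete at this simulated clock.
        for submitter in submitters:
            for event in submitter.submit(clock, cluster.metrics()):
                cluster.enqueue(event)
        # The scheduler returns bind/delete events, which update the simulated resources.
        for decision in scheduler.schedule(clock, cluster.pending_pods(), cluster.nodes()):
            cluster.apply(decision)
        # Write the metrics of this step through the metrics loggers.
        for logger in metrics_loggers:
            logger.log(clock, cluster.metrics())
        clock += tick  # tick the simulated clock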

And here’s the high-level class diagram.

We provide the following two customization points for scheduling simulations: submitters and schedulers.


Multiple users can be simulated by adding any number and combination of submitters, with the timing and number of submitted pods fully customizable through the simulator interface. For example, assume user A tends to submit more pods in the morning and user B tends to submit more pods in the evening. A submitter can be created for each user and plugged into the simulator.
Moreover, because submitters receive metrics from the simulator, they can change their behavior based on the state of the cluster, for example depending on whether it is crowded or not.
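As an illustration only (the class name, method signature, and pod representation below are hypothetical, not the simulator’s actual interface), a submitter modeling user A above could look roughly like this:

import random

class MorningUserSubmitter:
    """Submits more pods in the morning, fewer at other times (user A above)."""

    def submit(self, clock, metrics):
        # `metrics` could be used to throttle submissions when the cluster is crowded.
        rate = 5 if 8 <= clock.hour < 12 else 1
        return [{"owner": "user-a", "gpus": 1} for _ in range(random.randint(0, rate))]

A second submitter that peaks in the evening would model user B, and both can be plugged into the simulator at the same time.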


You have two options for the scheduler extension, depending on how you customize your Kubernetes scheduler. The first option mimics the normal Kubernetes scheduler (kube-scheduler) and can be extended with Prioritizers, Extenders, and Predicates. If you customize your scheduling logic through these kube-scheduler extension points, this is the best approach. However, because kube-scheduler is a queue-based scheduler, you may want to implement more complicated scheduling logic that does not fit a queue-based model, for example scheduling a new set of pods immediately after receiving multiple pod submissions. For this case, we provide an option to evaluate a scheduler that implements the interface defined in Kubernetes, through a thin wrapper function.
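The following is a minimal conceptual sketch (in Python, with hypothetical names and data structures; the actual extension points follow kube-scheduler’s own interfaces) of how a Predicate filters nodes and a Prioritizer scores the remaining ones:

def gpu_predicate(pod, node):
    # Predicate: filter out nodes that cannot run the pod.
    return node["free_gpus"] >= pod["requested_gpus"]

def least_requested_prioritizer(pod, node):
    # Prioritizer: score feasible nodes; prefer nodes with more free GPUs left over.
    return node["free_gpus"] - pod["requested_gpus"]

def schedule_one(pod, nodes):
    feasible = [n for n in nodes if gpu_predicate(pod, n)]
    if not feasible:
        return None  # the pod stays pending
    return max(feasible, key=lambda n: least_requested_prioritizer(pod, n))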


We’re implementing the following features before the beta phase to support simulations of more realistic cluster environments.

  • More isolation between components (e.g. supporting an RPC interface for schedulers and submitters)
  • Provide common submitter implementations (e.g. typical probability distributions such as uniform, binomial, and Poisson)
  • Support various cluster events (node failures, accidental pod failures, node addition/removal, etc.)
  • Support output formats that can be plotted with popular plotting tools (matplotlib, gnuplot, etc.)

Please try it out! We’re waiting for your feedback!

ChainerRL Visualizer: Deep RL Agent Visualization Library


2019-03-19 12:40:46

This post is contributed by Mr. Takahiro Ishikawa, who was an intern last year and now works as a part-time engineer at PFN.

We have released ChainerRL Visualizer, which visualizes the behaviors of agents trained by ChainerRL on the Web browser.

My name is Takahiro Ishikawa. I participated in the PFN 2018 internship and currently work as a part-time engineer.

This library was developed with the aim of “making debugging of deep RL implementations easier” and “contributing to the understanding of how deep RL agents work”.
It enables you to interactively observe the behaviors of trained deep RL agents in a Web browser.
The library is easy to use: all you have to do is pass the `agent` object implemented in ChainerRL and an `env` object that satisfies a specific interface to the `launch_visualizer` function provided by this library, along with a few options.

from chainerrl_visualizer import launch_visualizer

# Prepare agent and env objects here

# Prepare a dictionary which explains the meaning of each action
ACTION_MEANINGS = {
  0: 'hoge',
  1: 'fuga',
}

launch_visualizer(
    agent,                           # required
    env,                             # required
    ACTION_MEANINGS,                 # required
    port=5002,                       # optional (default: 5002)
    log_dir='log_space',             # optional (default: 'log_space')
    raw_image_input=False,           # optional (default: False)
    contains_rnn=False,              # optional (default: False)
)
After executing this script, a local Web server will be launched and the following features will be provided on the Web browser.

1. Roll-out one episode (or specified steps)

You can tell the agent to run one episode (or a specified number of steps) from the UI, and the outputs of the agent model are then visualized in chronological order.
In the following video, the probabilities of the next action and the state value of the agent trained with A3C are visualized.

2. Tick timestep and visualize the behaviors of environment and agent

In the following video, the agent can be stepped back and forth, and the outputs at each step are visualized along with the behavior of the environment. The pie chart at the bottom right shows the probabilities of the next action at each step.

3. Saliency map

If the input of the model is raw pixels, the UI can visualize a saliency map, which shows the sub-areas of the input to which the agent pays attention. This feature is implemented based on the paper Visualizing and Understanding Atari Agents.
In the following video, saliency maps of an agent trained with CategoricalDQN are visualized on top of the image of the environment.
For now, this feature lets you specify the number of steps for which saliency maps are created, because creating them is computationally very expensive.
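For reference, the perturbation-based saliency idea from that paper can be sketched roughly as follows. This is not ChainerRL Visualizer’s actual implementation; `policy_fn` is a hypothetical callable mapping a 2-D frame to action probabilities, and a square patch is used here instead of the paper’s Gaussian mask for simplicity.

import numpy as np
from scipy.ndimage import gaussian_filter

def perturbation_saliency(policy_fn, frame, radius=5, stride=5):
    # Blur the whole frame once; patches of this blurred image are swapped in.
    blurred = gaussian_filter(frame.astype(float), sigma=radius)
    base = np.asarray(policy_fn(frame))
    h, w = frame.shape
    saliency = np.zeros(((h + stride - 1) // stride, (w + stride - 1) // stride))
    for i in range(0, h, stride):
        for j in range(0, w, stride):
            perturbed = frame.astype(float).copy()
            i0, i1 = max(0, i - radius), min(h, i + radius)
            j0, j1 = max(0, j - radius), min(w, j + radius)
            perturbed[i0:i1, j0:j1] = blurred[i0:i1, j0:j1]
            diff = np.asarray(policy_fn(perturbed)) - base
            # Saliency = half the squared change in the policy output.
            saliency[i // stride, j // stride] = 0.5 * float(np.sum(diff ** 2))
    return saliency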

4. Miscellaneous visualizations

Various visualizations are supported for each type of agent.
For example, the value distributions of the agent trained with CategoricalDQN are visualized in the following video.

Quickstart guides are provided.

So far, most visualization tools in deep learning have focused on visualizing scores and other metrics over the course of model training. ChainerRL Visualizer is distinctive among them in that it can interactively and dynamically visualize the behaviors of the deep RL agent and the environment themselves.

Atari Zoo, released by Uber Research, was developed for a similar purpose. It aims to accelerate research on understanding deep RL agents by providing trained models and analysis tools for those frozen models. It enables researchers to participate in research on understanding deep RL agents even if they do not have large computing resources.

ChainerRL Visualizer differs from Atari Zoo in that any kind of agent implemented in ChainerRL can be analyzed dynamically, even during training, whereas Atari Zoo only visualizes already-trained models from its repository, and those models are limited to ALE environments and to specific algorithms whose architecture looks like (raw image => Conv => Conv .. => FC ..).

There are also other visualization tools with similar motivations, such as DQNViz, which visualizes the behaviors of DQN agents and various metrics during training.

Although much effort has been devoted to improving the performance of deep RL algorithms on benchmark tasks, less effort has gone into understanding what deep RL agents learn and analyzing how trained agents behave. We expect research on understanding deep RL agents, as illustrated by the visualizations above, to expand in the future.

ChainerRL Visualizer is now in beta and does not yet have enough features for deeply analyzing deep RL agents, so continuous development is needed for it to contribute to the emerging research area of understanding deep RL agents. We welcome you to participate in the development of ChainerRL Visualizer, adding new features or improving existing ones, through OSS collaboration.

Technologies behind Distributed Deep Learning: AllReduce


2018-07-10 15:11:24

This post is contributed by Mr. Yuichiro Ueno, who was a summer intern in 2017 and is now a part-time engineer at PFN.

Hello, I am Yuichiro Ueno. I participated in a summer internship program at PFN in 2017, and I currently work as a part-time engineer. I am an undergraduate student at Tokyo Institute of Technology, and my research topic is High-Performance, Parallel and Distributed Computing.

In this blog post, I will describe our recent study on algorithms for AllReduce, a communication operation used for distributed deep learning.

What is Distributed Deep Learning?

Currently, one of the significant challenges of deep learning is that it is a very time-consuming process. Designing a deep learning model requires exploring a large space of hyper-parameters and processing big data. Thus, accelerating the training process is critical for our research and development, and distributed deep learning is one of the essential technologies for reducing training time.

We have deployed a private supercomputer “MN-1” to accelerate our research and development process. It is equipped with 1024 NVIDIA(R) Tesla(R) P100 GPUs and Mellanox(R) InfiniBand FDR interconnect and is the most powerful supercomputer in the industry segment in Japan. By leveraging MN-1, we completed training a ResNet-50 model on the ImageNet dataset in 15 minutes.

Communication among GPUs is one of the many challenges when training distributed deep learning models in a large-scale environment. The latency of exchanging gradients over all GPUs is a severe bottleneck in data-parallel synchronized distributed deep learning.

How is the communication performed in distributed deep learning? Also, why is the communication so time-consuming?

The Importance of AllReduce in Distributed Deep Learning

In synchronized data-parallel distributed deep learning, the major computation steps are:

  1. Compute the gradient of the loss function using a minibatch on each GPU.
  2. Compute the mean of the gradients by inter-GPU communication.
  3. Update the model.

To compute the mean, we use a collective communication operation called “AllReduce.”
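To make these three steps concrete, here is a minimal sketch of one data-parallel update using mpi4py. This is an illustration only, not ChainerMN’s actual code; `grad_fn` and the flat parameter array are hypothetical.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def distributed_sgd_step(params, grad_fn, minibatch, lr=0.01):
    local_grad = grad_fn(params, minibatch)            # 1. gradient on this worker's minibatch
    mean_grad = np.empty_like(local_grad)
    comm.Allreduce(local_grad, mean_grad, op=MPI.SUM)  # 2. sum the gradients over all workers
    mean_grad /= comm.Get_size()                       #    divide by P to obtain the mean
    return params - lr * mean_grad                     # 3. update the model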

As of now, one of the fastest collective communication libraries for GPU clusters is NVIDIA Collective Communication Library: NCCL[3]. It achieves far better communication performance than MPI, which is the de-facto standard communication library in the HPC community. NCCL is indispensable for achieving high performance in distributed deep learning using ChainerMN. Without it, the ImageNet 15-min feat could not have been achieved[2].

Our researchers and engineers were curious about NCCL’s excellent performance. Since NCCL is not an open source library, we tried to understand the high performance of the library by developing and optimizing an experimental AllReduce library.

Algorithms of AllReduce

First, let’s take a look at the AllReduce algorithms. AllReduce is an operation that reduces the target arrays of all processes to a single array and returns the resulting array to every process. Let P be the total number of processes. Each process has an array of length N, denoted \(A_p\); the \(i\)-th element of the array of process \(p ~(1 \leq p \leq P)\) is \(A_{p,i}\).

The resulting array B is to be:
$$ B_{i} = A_{1,i} ~Op~ A_{2,i} ~Op~ \dots ~Op~ A_{P,i} $$

Here, Op is a binary operator. SUM, MAX, and MIN are frequently used. In distributed deep learning, the SUM operation is used to compute the mean of gradients. In the rest of this blog post, we assume that the reduction operation is SUM. Figure 1 illustrates how the AllReduce operation works by using an example of P=4 and N=4.


Fig.1 AllReduce Operation


There are several algorithms to implement the operation. For example, a straightforward one is to select one process as a master, gather all arrays into the master, perform reduction operations locally in the master, and then distribute the resulting array to the rest of the processes. Although this algorithm is simple and easy to implement, it is not scalable. The master process is a performance bottleneck because its communication and reduction costs increase in proportion to the number of total processes.

Faster and more scalable algorithms have been proposed. They eliminate the bottleneck by carefully distributing the computation and communication over the participant processes.
Such algorithms include Ring-AllReduce and Rabenseifner’s algorithm[4].

We will focus on the Ring-AllReduce algorithm in this blog post. This algorithm is also employed by NCCL [5] and baidu-allreduce [6].


Let us assume that P is the total number of processes, and that each process is uniquely identified by a number between 1 and P. As shown in Fig.2, the processes form a single ring.


Fig.2 Example of a process ring


First, each process divides its own array into P subarrays, which we refer to as “chunks”. Let chunk[p] be the p-th chunk.

Next, let us focus on process p. The process sends chunk[p] to the next process, and simultaneously receives chunk[p-1] from the previous process (Fig.3).


Fig.3 Each process sends its chunk[p] to the next process [p+1]


Then, process p performs the reduction operation on the received chunk[p-1] and its own chunk[p-1], and sends the reduced chunk to the next process p+1 (Fig.4).


Fig.4 Each process sends a reduced chunk to the next process


By repeating the receive-reduce-send steps P-1 times, each process obtains a different portion of the resulting array (Fig.5).


Fig.5 After P-1 steps, each process has a reduced subarray.


In other words, every chunk travels around the ring, accumulating the local chunk of each process it visits. After visiting all processes once, it becomes a complete portion of the final result array, and the last-visited process holds it.

Finally, all processes obtain the complete array by sharing the distributed partial results among themselves. This is achieved by performing the circulation step again, this time without the reduction operation, i.e., simply overwriting the corresponding local chunk with the received chunk in each process. The AllReduce operation completes when all processes have obtained all portions of the final array.
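The data movement of the two phases can be summarized in a small single-process simulation. This is a sketch for illustration only: a real implementation runs the P processes concurrently and overlaps the sends and receives of each step.

import numpy as np

def ring_allreduce_sim(arrays):
    """Simulate Ring-AllReduce (SUM) over a list of P equal-length arrays."""
    P = len(arrays)
    # Each process splits its array into P chunks.
    chunks = [np.array_split(a.astype(np.float64), P) for a in arrays]

    # Phase 1 (reduce-scatter): at step t, process p adds the chunk received
    # from its predecessor to its own copy of chunk (p - t - 1) mod P.
    # After P - 1 steps, process p holds the fully reduced chunk (p + 1) mod P.
    for t in range(P - 1):
        for p in range(P):
            c = (p - t - 1) % P
            chunks[p][c] = chunks[p][c] + chunks[(p - 1) % P][c]

    # Phase 2 (allgather): circulate the completed chunks once more,
    # overwriting instead of reducing.
    for t in range(P - 1):
        for p in range(P):
            c = (p - t) % P
            chunks[p][c] = chunks[(p - 1) % P][c].copy()

    return [np.concatenate(ch) for ch in chunks]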

Let’s compare the amount of communication of Ring-AllReduce to that of the simple algorithm we mentioned above.

In the simple algorithm, the master process receives all the arrays from all other processes, which means the total amount of received data is \((P - 1) \times N\). After the reduction operation, it sends the arrays back to all the processes, which is again \((P - 1) \times N\) data. Thus, the amount of communication of the master process is proportional to P.

In the Ring-AllReduce algorithm, we can calculate the amount of communication in each process as follows. In the first half of the algorithm, each process sends an array of size \(N/P\) a total of \(P-1\) times. In the second half, each process again sends an array of the same size \(P-1\) times. Thus, the total amount of data each process sends throughout the algorithm is \(2N(P-1)/P\), which is practically independent of P.
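As a rough illustration at the same scale as the evaluation below (\(P = 8\) processes and a 256 MB array):

$$ \text{simple algorithm (master process)}: 2(P-1)N = 14N \approx 3.5\ \mathrm{GB}, \qquad \text{Ring-AllReduce (per process)}: \frac{2N(P-1)}{P} = 1.75N \approx 448\ \mathrm{MB} $$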

Thus, the Ring-Allreduce algorithm is more efficient than the simple algorithm because it eliminates the bottleneck process by distributing computation and communication evenly over all participant processes. Many AllReduce implementations adopt Ring-AllReduce, and it is suitable for distributed deep learning workloads as well.

Implementation and Optimization

The Ring-AllReduce algorithm is simple to implement if basic send and receive routines are given. baidu-allreduce[6] is built on top of MPI using MPI_Send and MPI_Recv.

However, we tried to optimize further by using the InfiniBand Verbs API instead of MPI. To fully utilize hardware resources, the algorithm is broken into multiple stages, such as memory registration (pinning), cuda-memcpy, send, reduction, receive, and memory deregistration, and these stages are processed in a software pipeline. Here, “registration” and “deregistration” are the pre- and post-processing stages for DMA data transfer. Such low-level operations are abstracted away by MPI’s send/receive routines, so we would not be able to split them into pipeline stages. To increase the granularity of communication and computation, we further divide chunks into smaller sub-chunks. We also introduce a memory pool to hide memory allocation overhead.

Performance Evaluation

For performance evaluation, we compared our prototype (called PFN-Proto) to several AllReduce implementations shown in the Appendix.

Our prototype implementation currently focuses on inter-node communication; it is not optimized for intra-node communication using shared memory or GPU-to-GPU DMA data transfer. We evaluated the implementations in a one-process-per-node configuration. For Open MPI [7], our company has not yet introduced the latest 3.x series because it has a minor issue related to GPUDirect, so we used version 2.1.3 instead.

We used our private supercomputer MN-1 for this experiment, as shown in the “Experimental environment” below. Eight processes were run, where one process ran on one computing node. The target data size is 256MB.


Fig.6 AllReduce Execution Time


Figure 6 shows the result of the evaluation. Each bar indicates the median of 10 runs. The error bar indicates confidence intervals. The details of each library are shown in the “software versions” below.

First, let’s look at the median values. Our experimental implementation, PFN-Proto, showed the fastest time, approximately 82%, 286%, 28%, and 1.6% better than ompi, ompi-cuda, Baidu, and NCCL, respectively. One thing worth mentioning, which is not shown in the graph, is that Baidu achieved the fastest single-run time of 0.097 s among all five libraries.

Next, we focus on the variance of the performance. The maximum and minimum runtimes of PFN-Proto and NCCL are within +/- 3% and +/- 6% of their medians, respectively. In contrast, Baidu’s maximum value is 7.5x its median, because its first run takes a very long time. Even excluding the first run, its maximum runtime is +9.6% over the median, which is still larger than those of NCCL and PFN-Proto.

Our hypothesis is that the performance variances of MPI and MPI-based routines are attributed to MPI’s internal behavior related to memory operations. MPI’s programming interface hides memory allocation and registration operations for InfiniBand communication. Timings of such operations are not controllable from those AllReduce implementations.


We described the AllReduce communication pattern, which is very important for distributed deep learning. In particular, we implemented the Ring-AllReduce algorithm in our experimental communication library, and it achieved performance comparable to the NCCL library released by NVIDIA. The implementation efficiently utilizes available hardware resources through advanced optimizations such as the InfiniBand Verbs API and software pipelining. We will continue our research and development on accelerating distributed deep learning.

Caveats: our implementation is experimental, and we only demonstrated the performance on our in-house cluster. NCCL is a highly practical and usable library thanks to its performance suitability and availability on a wide range of IB-connected NVIDIA GPU clusters.


I would like to thank my mentors and the team for their kind support and feedback. Since my internship last year, I have been given access to rich computational resources, and it has been a fantastic experience.

From Mentors:

This project started with a question: “how does NCCL achieve such high and stable performance?” It is an advanced and experimental topic, but Mr. Ueno achieved a remarkable result with his high motivation and technical skills.

PFN is looking for talent, not only in the deep learning/machine learning field but across a full range of technical areas from hardware to software. Please visit https://www.preferred-networks.jp/en/jobs for more information.

For students who are interested in high-performance computing and other technologies, PFN offers international internship opportunities, as well as domestic programs for Japanese students. The application period has finished this year, but be ready for the next opportunity!


[1] Preferred Networks officially released ChainerMN version 1.0.0
[2] Akiba, et al., “Extremely Large Minibatch SGD: Training ResNet-50 on ImageNet in 15 Minutes”
[3] NVIDIA Collective Communications Library
[4] Rabenseifner, “Optimization of Collective Reduction Operations”, ICCS 2004
[5] Jeaugey, “Optimized Inter-GPU Collective Operations with NCCL”, GTC 2017
[6] baidu-allreduce
[7] Open MPI
[8] New ChainerMN functions for improved performance in cloud environments and performance testing results on AWS
[9] Tsuzuku, et al., “Variance-based Gradient Compression for Efficient Distributed Deep Learning”, In Proceedings of ICLR 2018 (Workshop Track)


Software versions

  • MPI (ompi): Open MPI 2.1.3; transfers from CPU memory to CPU memory (no GPU involved)
  • CUDA-aware MPI (ompi-cuda): Open MPI 2.1.3; transfers from GPU memory to GPU memory
  • baidu-allreduce (baidu): a customized version of baidu-allreduce, based on commit ID 73c7b7f (https://github.com/keisukefukuda/baidu-allreduce)
  • NCCL: 2.2.13

Experimental environment

  • Intel(R) Xeon(R) CPU E5-2667 * 2
  • Mellanox ConnectX-3 InfiniBand FDR (56Gbps) x2
  • NVIDIA Pascal P100 GPU (with NVIDIA Driver Version 375.20)

Guest blog with Hai, a former intern at PFN


2018-04-09 17:34:34

This is a guest post in an interview style with Hai Nguyen, a former intern at Preferred Networks in the summer of 2017, whose research has been accepted at one of the NIPS 2017 workshops. After finishing the PFN internship, he joined Kyoto University as a Ph.D. student.

“Semi-supervised Learning of Hierarchical Representations of Molecules Using Neural Message Passing,” Hai Nguyen, Shin-ichi Maeda, and Kenta Oono; NIPS Workshop on Machine Learning for Molecules and Materials, 2017. (Link, arXiv)


2018 Intern Results at Preferred Networks (Part 1)


2017-10-18 07:44:24

This summer, Preferred Networks accepted a record number of interns in Tokyo from all over the world. They tackled challenging tasks around artificial intelligence together with PFN mentors. We appreciate their passion, focus, and dedication to the internship.

In this post, we would like to share some of their great jobs (more to come).


Guest blog with Weihua, a former intern at PFN


2017-09-11 16:29:13

This is a guest post in an interview style with Weihua Hu, a former intern at Preferred Networks last year from the University of Tokyo, whose research was extended after the internship and accepted at ICML 2017.

“Learning Discrete Representations via Information Maximizing Self-Augmented Training,” Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, and Masashi Sugiyama; Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1558-1567, 2017. (Link)


Publish 2017 PFN Internship Coding Tasks

Kosuke Nakago

2017-08-01 11:23:51

* The Japanese version of this blog post is available here.

Preferred Networks (PFN) organizes a two-month summer internship program for students in August and September every year.

The number of applications has been increasing year by year. This year we received the highest number of applications ever, and the interview and selection process has been finished.

PFN 2017 Summer Internship Program


We have now published our intern coding tasks on GitHub.

