2018 PFN Internship Coding Tasks

Mitsuru Kusumoto

2018-07-25 18:01:08

We have published the coding tasks used in the screening process for the 2018 PFN internship. They are available on GitHub.


Hello, I’m Kusumoto, an engineer at PFN. At PFN, we organize a summer internship from August to September. The coding task is what we asked applicants to solve during the screening process to check their skill level in programming, problem solving, and so on. Because we are hiring in a wide range of fields, including machine learning, this year we prepared five kinds of problems: “Machine learning/Mathematics,” “Back-end,” “Front-end,” “Processor/Compiler,” and “Chainer.” Applicants chose one of these tasks according to the theme they applied for.

This year, we received many more applications than in previous years. With this increase in applications, we also increased the number of offers we made.

The details of the coding tasks are as follows.

  • Machine learning/Mathematics: You are asked to implement an algorithm that generates adversarial examples for a neural network model, and to write a short report on the algorithm’s performance.
  • Back-end: You are asked to create a tool that analyzes log files.
  • Front-end: You are asked to develop a prototype of an annotation tool for speech videos.
  • Processor/Compiler: You are asked to optimize the code of matrix multiplication. Further, you need to design a hardware circuit of matrix multiplication.
  • Chainer: You are asked to implement training code for a given model using Chainer.

Every year, we put careful thought and creativity into the coding tasks. I hope they also serve as good practice problems for whatever you want to study.

I created the Machine learning/Mathematics task this year. Let me briefly describe what I usually keep in mind when creating problems.

  • Do not require specific knowledge: At PFN, we hire people from a wide range of fields. We make the problems as solvable as possible without any particular experience or knowledge of machine learning itself, so that people from various backgrounds can tackle them.
  • Make the problem setting close to actual research: In machine learning and deep learning, we often repeat a process like “find a good theme -> devise a novel method -> implement it -> summarize and evaluate the results.” Our problem setting imitates the latter part of this process. It may be similar to an assignment in a university class.
  • Choose an interesting theme: Lots of interesting research results appear every day in machine learning and deep learning, and the coding task should be interesting too. This year, the task was on the Fast Gradient Sign Method, which performs far better than a random-noise baseline. I believe this made for a fun experiment in and of itself.
  • Do not make the problem too difficult: It is not good if the problem is too time-consuming. Our aim is that a student with sufficient skills can solve the problem within one or two days.

We evaluate the submitted code and report from various perspectives. Correct implementation is not the only thing that matters: we also evaluate whether the code is readable to other engineers, whether there is an appropriate amount of unit tests, and whether other engineers can easily replicate the results.

In addition to the code, summarizing the results and evaluating the proposed method are important parts of an experiment. Reporting results to other people also matters, especially when you work in a team. We check the submitted report to see how well it does these things.

If you are interested in PFN, we look forward to receiving your application in the next internship program.

We are also hiring full-time employees in Tokyo, Japan and San Mateo, California. Please refer to the job page below.


About the Release of the DNN Inference Library Menoh

Shintarou Okada

2018-06-26 14:36:43

Don’t you want to use languages other than Python, especially in the deep learning community?

Menoh repository : https://github.com/pfnet-research/menoh

I am Shintaro Okada, developer of Menoh. This article will give you an introduction to Menoh and describe my motivation for the development.

Menoh is a library that can read trained DNN models in the ONNX format for inference. I wrote it in C++, but it has a C language interface. So, its functions can easily be called from other languages as well.  At release, C++, C#, and Haskell wrappers are available, and Ruby, NodeJS, and Java (JVM) wrappers are in the pipeline. I leveraged Intel’s MKL-DNN backend, so that even without using a GPU, it does fast inference on Intel CPUs. Menoh makes it possible to deploy your trained Chainer model to an application programmed in languages other than Python in no time.

By the way, why is it Python, rather than Ruby, that has come to dominate the deep learning community? Why not R, Perl, or C++? There are many programming languages out there, any of which might instead have been widely used to write deep learning frameworks (of course, each language is useful in its own way, and how likely that would have been depends on the language). Python holds hegemony in our universe, but in another universe, Lisp may hold supremacy. That said, we have no choice but to live in this universe, where we must part with our sweet (), {}, or begin/end and write blocks with pointless indentation in order to implement the deep-something under today’s Python rule. What a tragedy. I wish I could say so without any reservations, but Python is a good programming language.

Yes, Python is a good language. It comes with a myriad of useful libraries, NumPy in particular; it is dynamically typed; and it has garbage collection. All of these make the trial-and-error process of writing code to train and deploy DNNs easier. Chainer is a flexible and easily extensible DNN framework, and it is, of course, written in Python. Chainer is amazingly easy to use thanks to its magic called Define-by-Run. Sure, another language could have been used to implement the Define-by-Run feature, but without Python the code would have been more complicated and its implementation more painful. The Python language itself clearly plays a part in Chainer’s user-friendliness.
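To give a flavor of what Define-by-Run means, here is a toy sketch (my own illustration, not Chainer’s actual implementation): the computation graph is recorded while the forward computation runs as ordinary Python code, and gradients are obtained by walking that recorded graph backwards.

```python
# Toy Define-by-Run: the graph is built as the forward pass executes,
# then traversed in reverse for backpropagation.
class Variable:
    def __init__(self, value, parents=(), backward_fn=None):
        self.value = value
        self.parents = parents          # upstream Variables
        self.backward_fn = backward_fn  # maps upstream grad to parent grads
        self.grad = 0.0

    def __mul__(self, other):
        out = Variable(self.value * other.value, parents=(self, other))
        out.backward_fn = lambda g: (g * other.value, g * self.value)
        return out

    def __add__(self, other):
        out = Variable(self.value + other.value, parents=(self, other))
        out.backward_fn = lambda g: (g, g)
        return out

    def backward(self, grad=1.0):
        self.grad += grad
        if self.backward_fn is not None:
            for parent, g in zip(self.parents, self.backward_fn(grad)):
                parent.backward(g)

x = Variable(3.0)
y = x * x + x       # the graph is defined here, by running the code
y.backward()
print(y.value, x.grad)  # 12.0 7.0  (d/dx of x^2 + x at x=3 is 7)
```

Because the graph is just a record of what the Python code did, ordinary `if` statements and loops can change the network's shape on every iteration, which is exactly what makes the style pleasant in Python.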

For us, to study DNN is not difficult, since we have Chainer backed by easy-to-use Python. We can write and train DNN models without a hitch.  It’s heavenly. On the flip side, to deploy trained DNN models is where the pain starts.

It may be an exaggeration to use the word pain. When deploying to a Python-friendly environment, I can just use Chainer as is, and there is no pain at all (at least in the deployment work). But what if one’s environment doesn’t allow Python? Outside the lab, one may not be able to use Python due to security or computing-resource issues, and Python may be of no use in areas dominated by other languages. There are a variety of situations like this (for example, Ruby enjoys enduring popularity in the Web community even today). Some DL frameworks have been designed with deployment in mind and allow users to write DNNs in C or C++ without Python, but they often require a lot of effort to use and have too few wrappers to be convenient. While the knowledge of training DNNs has become widespread, the deployment of DNNs has remained far less developed.

I just wanted to build trained models into my applications, but it’s been a hassle.

This is why I decided to develop Menoh.

Menoh is a result of my project under PFN’s 20% rule. It’s our company policy that allows PFN members to spend 20% of their time at work on their favorite tasks or projects, aside from formally assigned tasks. At PFN, we have various other 20% projects and study sessions both by individuals and groups progressing at the moment.  

As a matter of fact, Menoh is based on a library called Instant, which I developed in a personal project in December 2017. Since then, I have taken advantage of the 20% time to enhance its functionality. Along the way, some of my colleagues gave me valuable advice on the design, and others volunteered to write wrappers for other languages. Thanks to the support of all these members, Instant has finally been released as an experimental product in pfnet-research under the new name Menoh. I plan to continue spending 20% of my time improving it. I hope you will use Menoh, and I would appreciate it if you would open issues for suggestions or any bugs you may find.

Research Activities at Preferred Networks

Takuya Akiba

2018-06-18 15:03:33

Hello, I am Takuya Akiba, a newly appointed corporate officer doubling as chief research strategist. I would like to make an inaugural address as well as share my view on research activities at PFN.

What does research mean at PFN?

It is very difficult to draw a line between what is research and what is not, and it is not worthwhile to go out of your way to define it. Research means to master something by sharpening one’s thinking. It is usually understood that research is to investigate and study a subject deeply in order to establish facts and reach new conclusions about it.

Almost all projects at PFN are challenging, entail great uncertainty, and require no small amount of research. In most cases, research and development of core deep learning technologies, not to mention their applications, does not go well without selecting an appropriate method or devising a nontrivial technique according to a task or data. We are also dealing with unknown problems that arise when trying to combine technologies in multiple fields such as robotics, computer vision, and natural language processing. In addition to that, when we design a cluster, manage its resources, and work on a deep learning framework, there are many things to consider and solve by trial and error in order to make them useful and highly efficient while satisfying requirements that are specific to deep learning at the same time.

Among them, the following projects in particular involve a great deal of research:

  • Academic research whose findings are worth publishing in a paper
  • Preparing and giving demonstrations at exhibitions
  • Participating in competitions
  • Solving open social problems that have been left unsolved

We have already started producing excellent results in these activities, with our papers continuously being accepted by a wide range of top conferences, including ICML, CVPR, ACL, and CHI. We are not only publishing more papers than before, but our papers are receiving global attention. One of our researchers won the Best Paper Award on Human-Robot Interaction at ICRA’18, and another had a paper selected for an oral presentation at ICLR’18. As for demonstrations, we displayed our work at several exhibitions, including CES 2016 and ICRA 2017. We have also taken part in many competitions and achieved great results at the Amazon Picking Challenge 2016, the IPAB drug discovery contest, and the like.

Why does PFN do research?

What is the point of researching things that don’t seem to bring immediate profit to a business like PFN? For example, writing a research paper means the researcher spends a good amount of precious working time on it, and publishing it is tantamount to revealing technology to people outside the company. You may be wondering whether activities like academic research and paper writing have a negative impact on the company.

At PFN, however, we highly value such activities and will even continue to increase our focus on them. It is often said that the “winner takes all” in the competitive and borderless world of computer and AI businesses. In order to survive in this harsh business environment, we need to obtain a world-class technological strength through these activities and retain a competitive edge to stay ahead of rivals. Building a good patent portfolio is practically important as well.

Also, I often hear people say, “Isn’t it more efficient to focus on practical applications of techniques from papers published by others?” However, by the time those papers come out and catch our eye, the leading organizations in the world will already be far ahead. Besides, the information we can get from reading a paper is limited. Oftentimes we need to go through a process of trial and error, or ask the authors, before successfully reproducing a published result, or apply it to other datasets to learn the negative aspects that are not written in the paper. All of this takes an incredible amount of time. Alan Kay, known as the father of the personal computer, once said: “The best way to predict the future is to invent it.” Now that we have made one great achievement after another in multiple research fields, his words are beginning to hit home.

Furthermore, we not only do research within the company but also place great importance on presenting our results to contribute to the community. This not only helps make our presence felt both in and outside Japan, but will eventually accelerate the advances in the technology needed to realize our objectives if we can inspire other professionals to undertake follow-on research based on the techniques we publish. This is why we are very active in making the code and data used in our research public, as well as releasing software as OSS. Our researchers also peer-review papers for academic journals during work hours as part of our contribution to the academic community.

What kind of research are we promoting?

We are working on an extensive range of research fields, centering around deep learning. They include computer vision, natural language processing, speech recognition, robotics, compiler, distributed processing, dedicated hardware, bioinformatics, and cheminformatics. We will step up efforts to further promote these research activities based on the following philosophy.

Legitimately crazy

Any research should be conducted not only by looking at the world today but also with an eye to the future. Nor should the value of research be judged only against today’s common knowledge. An impractical method that requires heavy computation, or a massive undertaking that no one dares to attempt in today’s computing environment, is not necessarily a bad thing. For example, last year we succeeded in a high-profile project in which we trained an image recognition model in minutes through distributed processing on 1,024 GPUs. Not only was the speed unprecedented, but the scale of the experiment itself, using 1,024 GPUs at once, was out of the ordinary. It may not be realistic to use 1,024 GPUs for ordinary training. Does that make research like this not worth conducting?

Computational speed continues to improve. Especially for deep learning, people are keen to develop dedicated chips. According to an analysis released by OpenAI, the computational power used in large-scale deep learning training has been doubling every 3.5 months. Settings that seem incredible now may become commonplace and widely available within several years. Knowing what will happen and what will be a problem at that time, thinking about how to solve those problems and what we will then be able to do, and quickly embarking on this kind of far-sighted action is extremely important. The experiment with 1,024 GPUs mentioned above was the first step in our endeavor to create an environment that makes such large-scale experiments nothing out of the ordinary. We are taking advantage of having a private supercomputer and a team specializing in parallel and distributed computing to realize this.
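As a back-of-the-envelope check of what that doubling rate implies (a rough calculation using only the 3.5-month figure from the OpenAI analysis cited above):

```python
# Growth implied by "compute doubles every 3.5 months".
doubling_period_months = 3.5

def growth_factor(months):
    return 2 ** (months / doubling_period_months)

print(f"per year:     x{growth_factor(12):.1f}")   # roughly 10x per year
print(f"over 5 years: x{growth_factor(60):,.0f}")  # several orders of magnitude
```

At that pace, an experiment that looks extravagant today, such as one using 1,024 GPUs, sits only a few years ahead of what will be routine.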

Out into the world

You should aspire to lead the world in your research, regardless of the field. Having technological strength that is a cut above the rest of the world can bring great value. Do not act too inwardly; look outside the company and take the lead. Publishing a paper that is highly recognized by researchers worldwide, placing among the top finishers in a competition, or being invited to lecture on a spotlighted subject: these are the kinds of activities you should aim for. In reality, it may be difficult to outdistance the world in every research area. But when you are conscious of, and aiming for, the top spot, you will know where you stand relative to the most advanced research in the world.

It is also very important to work your way into the international community. If you become acquainted with leading researchers and they recognize you as someone to be reckoned with, you will be able to exchange valuable information with them. Therefore, PFN encourages its members to give talks outside the company and makes sure to recognize those who have made such contributions.

Go all-out to expand

Research should not be kept behind closed doors but expanded further. For example, compiling a paper on your research is an important milestone, but it is not the end of the project; you shouldn’t undertake research just for the sake of writing a paper. In deep learning, a common technique can sometimes work effectively across different application fields. I have high hopes that PFN members will widen the scope of their research by working with members from different study areas; having people with a variety of expertise is one of our company’s strengths. If possible, you should also consider developing new software, or giving feedback to make in-house software more serviceable. It would also be great if your research resulted in improving day-to-day business activities. Although I have emphasized papers accepted at top conferences, I have no intention of evaluating R&D activities solely by the number of papers or the ranking of the conferences that accepted them.

To break into one of the top places, you need to fully utilize your skills while staying highly motivated. Having said that, you don’t need to do everything by yourself; you should positively consider relying on someone who has an ability you lack. This applies not only to technical skills but also to paper writing. Even if you put a lot of effort into your research and made interesting findings, your paper could be underestimated, and thus rejected by a conference, because of misleading wording or other problems caused by inexperience in writing good papers. PFN has many senior researchers with years of experience in basic research who can teach younger members not only paper writing but also how to conduct a thorough investigation and the correct way to run comparative experiments. I will ensure that our junior members can receive the support of these experienced researchers.

The appeal of working on R&D at PFN

What are the benefits of engaging in research and development at PFN for researchers and engineers?

One of the most attractive points is that your individual skills, as well as our organizational technical competence, are truly in demand and can make a big difference in PFN’s technical domains, mainly deep learning. The difference in technical skills, whether individual or team, is hugely reflected in the outcome of research, so having high technical skills leads directly to high value. Your individual skills, and your ability to put them to good use in a team, are highly regarded. This is particularly appealing if you are confident about, or motivated to improve, your technical capability.

It is also worth mentioning that we have flexibility in the way we do research. Some researchers devote 100% of their time to pure basic research; they have formed a team entirely dedicated to it, which we even plan to expand. Some handle business problems while pursuing their main research activities. Joint research with academia is also actively carried out, and some members work part-time while taking a doctoral course in graduate school to polish their expertise.

We are also putting extra effort into enhancing our in-house systems to promote R&D activities. PFN fully supports members taking on new challenges by trusting them and giving them considerable discretion, and by flexibly dealing with requests to improve in-house systems or to obtain assets not yet available in the company. For example, all PFN members are eligible to spend up to 20% of their work hours at their own discretion. This 20% rule enables us to test our ideas right away, so I expect our motivated members to produce unique ideas and launch new initiatives one after another.

Everything from algorithms, to software frameworks, to research-supporting middleware, to hardware is important in deep learning and the other technical domains that PFN engages in. It is also appealing that at PFN you get to chat with experts in a wide range of research fields such as deep learning, reinforcement learning, computer vision, natural language processing, bioinformatics, high-performance computing, distributed systems, networks, robotics, simulation, data analysis, optimization, and anomaly detection. You can ask them about subjects you’re not familiar with, discuss practical problems, work together on a research subject, and so on.

In conclusion

Finally, let me write a little about my personal aspirations. I have been given an honor greater than I deserve: serving as corporate officer and chief research strategist at a company where many esteemed professionals are doing splendid work in a wonderful team whose abilities keep inspiring me every day. At first, I hesitated over whether to accept this important role, which seemed too big for someone like me, and I was afraid that I might not be able to live up to expectations.

I was a researcher in academia before joining PFN, and in my university days I worked as an intern at several corporate labs outside Japan because I was interested in becoming a researcher in a corporate environment. During one of the internships, the company carried out layoffs, and I saw, right before my eyes, every researcher in the lab, including my mentor, being dismissed. I experienced firsthand how tough it is to keep research activities meaningful enough for a company.

Despite that bitter experience, I believe PFN should promote research as a corporate activity and generate value by maintaining it in a healthy state. This is not an easy task, but it is a very exciting and meaningful one, and it is exactly the area where the experience and knowledge I have gained in various places can be useful. So, I decided to do my best to make contributions in this new role.

I excel at combining my areas of expertise, such as research, engineering, deep learning, and distributed computation, to create new value, as well as at devising and executing a competitive strategy. I will try to exploit these strengths to the fullest in broader areas.

PFN is looking for researchers and engineers who are enthusiastic about working with us on these research activities.

NIPS’17 Adversarial Learning Competition

Takuya Akiba

2018-04-20 18:03:49

PFN members participated in the NIPS’17 Adversarial Learning Competition, held on Kaggle as one of the events accompanying NIPS’17, an international conference on machine learning, and came in fourth place. As a result, we were invited to give a presentation at NIPS’17, and we have written and published a paper explaining our method. In this article, I will describe the details of the competition as well as the approach we took to achieve fourth place.

What are Adversarial Examples?

Adversarial examples [1, 2, 3] are a very hot research topic and are said to be one of the biggest challenges facing the practical application of deep learning. Take image recognition, for example. It is known that adversarial examples can cause a CNN to recognize images incorrectly just by making modifications to the original images that are too subtle for humans to notice.


The above are sample images of adversarial examples (ref. [2]). The left image is a picture of a panda that is correctly classified as a panda by a CNN. In the middle is maliciously crafted noise. The right image looks the same as the left panda, but it has the slight noise superimposed on it, causing the CNN to classify it not as a panda but as a gibbon, with a very high confidence level.

  • [1] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, Rob Fergus: Intriguing properties of neural networks. CoRR abs/1312.6199 (2013)
  • [2] Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy: Explaining and Harnessing Adversarial Examples. CoRR abs/1412.6572 (2014).

NIPS’17 Adversarial Learning Competition

The NIPS’17 Adversarial Learning Competition we took part in was, as the name suggests, a competition about adversarial examples. It consisted of two tracks, which I explain below: the attack track and the defense track.

Attack Track

You must submit a program that adds malicious noise to input images to convert them into adversarial examples. You earn points depending on how well the adversarial images generated by your program fool the image classifiers submitted to the defense track by other competitors. Specifically, your score is the average rate of misclassifications made by each submitted defense classifier. The goal of the attack track is to develop a method for crafting formidable adversarial examples.

Defense Track

You must submit a program that returns a classification result for each input image. Your score is the average accuracy in classifying the adversarial images generated by each adversarial example generator submitted to the attack track by other teams. The goal of the defense track is to build a robust image classifier that is hard to fool.
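In miniature, the two scoring rules above can be written down as follows; the accuracy numbers here are made up purely for illustration:

```python
import numpy as np

# Rows are attack submissions, columns are defense submissions; each
# entry is the fraction of images that the defense classified correctly
# against that attack (illustrative numbers).
accuracy = np.array([
    [0.2, 0.5, 0.4],   # defenses vs. attack 0
    [0.7, 0.8, 0.6],   # defenses vs. attack 1
])

defense_scores = accuracy.mean(axis=0)       # average accuracy per defense
attack_scores = (1 - accuracy).mean(axis=1)  # average misclassification per attack
print(defense_scores)  # per-defense: 0.45, 0.65, 0.5
print(attack_scores)   # per-attack: ~0.633, 0.3
```

Note the symmetry: every attack is scored against every defense, so a strong showing in one track depends directly on the field of opponents in the other.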

Rules in Detail

Your program has to process multiple images. Attack programs are only allowed to generate noise up to a parameter ε, which is given at run time. Specifically, an attack can change the R, G, and B values of each pixel of each image only by up to ε; in other words, the L∞ norm of the noise must be at most ε. The attack track was divided into non-targeted and targeted subsections; we participated in the non-targeted competition, which is the focus of this article. For more details, please refer to the official competition pages [4, 5, 6].
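As a concrete illustration of the L∞ constraint, here is a minimal NumPy sketch (the function name and values are my own, not competition code) that projects arbitrary noise back into the allowed set:

```python
import numpy as np

def project_linf(original, perturbed, eps):
    """Clip `perturbed` so that every R, G, B value differs from
    `original` by at most eps and stays a valid pixel value."""
    noise = np.clip(perturbed - original, -eps, eps)  # ||noise||_inf <= eps
    return np.clip(original + noise, 0.0, 255.0)      # keep valid pixel range

img = np.array([[100.0, 200.0], [0.0, 255.0]])
adv = np.array([[150.0, 190.0], [-30.0, 255.0]])
out = project_linf(img, adv, eps=16.0)
print(out)  # values become 116, 190, 0, 255
```

Any attack, however it generates its noise, must pass through a projection like this before submission, so ε directly bounds how visible the perturbation can be.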

Standard Approach for Creating Adversarial Examples

We competed in the attack track. First, I will describe the standard methods for creating adversarial examples. Roughly speaking, the most popular method, FGSM (fast gradient sign method) [2], and almost all other existing methods take the following three steps:

  1. Classify the subject image with an image classifier
  2. Backpropagate through to the image to calculate the gradient
  3. Add noise to the image using the calculated gradient


Methods for crafting strong adversarial examples differ in whether these steps are carried out once or repeated, how the loss function used in backpropagation is defined, and how the gradient is used to update the image, among other factors. Most teams in the competition seemed to build their attacks on this kind of approach.
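The three steps above can be sketched as a single FGSM iteration, with a tiny linear softmax classifier standing in for the image classifier CNN (all names and sizes here are illustrative, not the competition code):

```python
import numpy as np

np.random.seed(0)
W = np.random.randn(3, 4)   # toy "classifier": 3 classes, 4 "pixels"
x = np.random.randn(4)      # the "image"
label = 0                   # its true class

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# 1. classify the image
probs = softmax(W @ x)
# 2. backpropagate the cross-entropy loss to the image:
#    dL/dx = W^T (probs - onehot(label))
grad_x = W.T @ (probs - np.eye(3)[label])
# 3. add noise in the direction of the gradient's sign
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

loss = lambda v: -np.log(softmax(W @ v)[label])
print(loss(x), "->", loss(x_adv))  # the classifier's loss increases
```

Iterative variants simply repeat steps 1-3 with a smaller step size, projecting back into the ε-ball after each update.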

Our Method

Our approach was to create a neural network that produces adversarial examples directly, which differs greatly from the current major approach described above.


The process to craft an attack image is simple: all you need to do is feed an image to the neural network. It then generates an output image, which is an adversarial example in itself.

How We Trained the Attack Network

The essence of this approach is, of course, how we created the neural network. We henceforth call our neural network that generates adversarial examples the “attack network.” We trained the attack network by repeating the following steps:

  1. Use the attack network to generate an adversarial example
  2. Classify the generated adversarial example with an existing trained CNN
  3. Backpropagate through the CNN to calculate the gradient with respect to the adversarial example
  4. Backpropagate further through the attack network and update its weights using that gradient
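The training loop above can be sketched in miniature as follows; here a single linear layer plays the role of the attack network and a frozen linear softmax model plays the classifier CNN (purely illustrative, not our actual fully convolutional network):

```python
import numpy as np

np.random.seed(0)
d, n_cls, eps, lr = 4, 3, 0.2, 0.1
W = np.random.randn(n_cls, d)   # frozen, pre-trained "classifier"
A = np.zeros((d, d))            # the "attack network" parameters

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_and_grad_A(x, label):
    # 1. attack network generates the adversarial example
    h = np.tanh(A @ x)
    x_adv = x + eps * h
    # 2. the frozen classifier classifies it
    probs = softmax(W @ x_adv)
    loss = -np.log(probs[label])
    # 3. backprop through the classifier to the adversarial example
    g_adv = W.T @ (probs - np.eye(n_cls)[label])
    # 4. backprop further into the attack network parameters
    g_A = np.outer(eps * g_adv * (1 - h ** 2), x)
    return loss, g_A

x, label = np.random.randn(d), 0
before, _ = loss_and_grad_A(x, label)
for _ in range(100):            # gradient *ascent*: make the classifier's loss grow
    _, g_A = loss_and_grad_A(x, label)
    A += lr * g_A
after, _ = loss_and_grad_A(x, label)
print(before, "->", after)      # classifier loss on x_adv increases
```

The tanh keeps the perturbation bounded, mirroring the ε constraint; in the real setting the classifier stays fixed while only the attack network's weights are updated, exactly as in this loop.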


We designed the architecture of the attack network to be fully convolutional. A similar approach has been proposed in the following paper [7] for your reference.

  • [7] Shumeet Baluja, Ian Fischer. Adversarial Transformation Networks: Learning to Generate Adversarial Examples. CoRR, abs/1703.09387, 2017.


Techniques to Boost Attacks

We developed techniques such as multi-target training, multi-task training, and gradient hints to generate more powerful adversarial examples, devising the architecture of the attack network and the training method through repeated trial and error. Please refer to our paper for details.

Distributed Training on 128 GPUs Combining Data and Model Parallelism

To address the significant training time and to enable a large-scale attack network architecture, we used ChainerMN [8] to train in a distributed manner on 128 GPUs. Two factors shaped the design: the batch size had to be reduced to fit GPU memory, since the attack network is larger than the classifier CNN, and each worker uses a different classifier network for the multi-target training mentioned above. We therefore combined ChainerMN’s standard data parallelism with its latest model parallelism features to achieve effective data parallelism.


  • [8] Takuya Akiba, Keisuke Fukuda, Shuji Suzuki: ChainerMN: Scalable Distributed Deep Learning Framework. CoRR abs/1710.11351 (2017)

Generated Images

In our approach, not only the method but also the generated adversarial examples themselves are unique.


Original images are in the left column, generated adversarial examples in the middle, and the generated noise (i.e., the differences between the original images and the adversarial examples) in the right column. We can observe two distinguishing features:

  • Noise was generated to cancel the fine patterns such as the texture of the panda’s fur, making the image flat and featureless.
  • Jigsaw puzzle-like patterns were added unevenly but effectively by using the original images wisely.

Because of these two features, many image classifiers seemed to classify these adversarial examples as jigsaw puzzles. Interestingly, we did not specifically train the attack network to generate these puzzle-like images; we trained it only with objective functions for misleading image classifiers. Evidently, the attack network learned on its own that generating such jigsaw-puzzle-like images is effective.


In the end, we finished fourth among about 100 teams. Although I was personally disappointed by this result, as we had been aiming for the top place, we had the honor of giving a talk at the NIPS’17 workshop, since only the top four winners were invited to do so.


At the invitation of the event organizers, we also co-authored a paper about the competition with big names in machine learning such as Ian Goodfellow and Samy Bengio. It was a good experience to publish a paper with such great researchers [9]. We have also made the source code available on GitHub [10].

  • [9] Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille, Sangxia Huang, Yao Zhao, Yuzhe Zhao, Zhonglin Han, Junjiajia Long, Yerkebulan Berdibekov, Takuya Akiba, Seiya Tokui, Motoki Abe. Adversarial Attacks and Defences Competition. CoRR, abs/1804.00097, 2018.
  • [10] pfnet-research/nips17-adversarial-attack: Submission to Kaggle NIPS'17 competition on adversarial examples (non-targeted adversarial attack track): https://github.com/pfnet-research/nips17-adversarial-attack

While our team ranked fourth, we had been attracting attention from other participants even before the competition ended because our run time was very different in nature from that of other teams, reflecting the completely different approach we took. The table below lists the top 15 teams with their scores and run times. As you can see, our team's run time was an order of magnitude shorter. This is because our attack required only forward computation, whereas almost all other teams' approaches repeatedly computed both forward and backward passes to obtain gradients with respect to the images.
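To make the cost difference concrete, the following NumPy sketch contrasts the two styles of attack. This is not our competition code: `classifier_grad` (which would return the loss gradient with respect to the input) and `attack_net` (a pre-trained generator) are hypothetical stand-ins for illustration only.

```python
import numpy as np

def iterative_gradient_attack(x, classifier_grad, eps, steps=10):
    """Typical approach: each step needs a forward AND a backward pass
    through the target classifier (an iterated FGSM-style update)."""
    x_adv = x.copy()
    for _ in range(steps):
        g = classifier_grad(x_adv)                 # forward + backward
        x_adv = x_adv + (eps / steps) * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # stay in the eps-ball
    return np.clip(x_adv, 0.0, 1.0)

def forward_only_attack(x, attack_net, eps):
    """Our style of approach: one forward pass through a pre-trained
    attack network directly yields the perturbation."""
    noise = attack_net(x)                          # forward only
    return np.clip(x + eps * np.sign(noise), 0.0, 1.0)
```

The iterative attack pays for `steps` forward/backward passes at attack time, while the forward-only attack amortizes all gradient computation into the offline training of the attack network; hence the order-of-magnitude gap in run time.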


In fact, according to a PageRank-style analysis conducted by one of the participants, our team got the highest score. This indicates our attack was especially effective against the top defense teams; it must have been difficult to defend against an attack so different in nature from the others. For your information, a paper describing the method used by the top team [11] has been accepted to the international computer vision conference CVPR'18 and is scheduled to be presented in the spotlight session.

  • [11] Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Xiaolin Hu, Jun Zhu: Discovering Adversarial Examples with Momentum. CoRR, abs/1710.06081, 2017.


Our efforts to participate in the competition started as part of our company’s 20% projects. Once things got going, we began to think we should concentrate our efforts and aim for the top place. After some coordination, our team got into full gear and, toward the end, spent almost all of our work hours on this project. PFN has an atmosphere that encourages its members to participate in competitions like this; other PFN teams have competed in the Amazon Picking Challenge and the IT Drug Discovery Contest, for example. I like taking part in this kind of competition very much and will continue to do so on a regular basis, while choosing competitions relevant to the challenges our company wants to tackle. Quite often, I find the skills honed through these competitions useful at critical moments of our company projects, such as when tuning accuracy or speed.

PFN is looking for engineers and researchers who are enthusiastic about working with us on this kind of activity.

The PFN spirit that we put in the required qualifications – “Qualified applicants must be familiar with all aspects of computer science”

Toru Nishikawa

2018-03-06 12:29:19

*Some of our guidelines for applicants have already been updated based on the content of this blog post, so that our true intent is conveyed properly.


Hello, this is Nishikawa, CEO of PFN.

I am going to write about one of our hiring requirements.

It is about the wording in the job section of our website used to describe one of the qualifications/requirements for researchers – “Researchers must be seitsu (精通, “deeply familiar” in Japanese) with all aspects of computer science.” We have always had this requirement since the days of PFI, because we truly believe in the importance of having deep knowledge of not just one specific branch but various areas when doing research on computer science.

Take database research for example. It is essential to have thorough knowledge of not only the theory of transaction processing and relational algebra, but also storage and the computer architecture on which a database runs. Researchers are also required to know about computer networks, now that distributed databases have become common. In today’s deep-learning research, a single computer cannot produce competitive results, so highly efficient parallel processing is a must. When creating a framework, it is vital to understand computer architecture and language processors. When creating a domain-specific language without understanding programming language theory, you will easily end up making a language that looks like an annex added to a building as an afterthought. In reinforcement learning, it is important to refine simulation and rendering technologies.

In short, we live in an age when one who knows about only one particular area can no longer have an advantage. Furthermore, it is difficult to know in advance which areas of computer science should be fused to generate new technology. In order to realize our mission, that is, to make a breakthrough with cutting-edge technologies, it is extremely important to strive to familiarize oneself with each and every branch of computer science.

This familiarity, a comprehensive knowledge and deep understanding in every field of computer science, is expressed by the Japanese word seitsu mentioned in the first paragraph. The word does not mean you can publish papers in top conferences – that would require not only seitsu but also the ability to conduct new groundbreaking research. (Being able to perform such research is a very important skill that we also need to acquire.) It also does not mean to “know everything” about each field. Someone who declares he knows everything is, rather, not a scientist.

The field of computer science is making rapid progress and we must always pursue its advancement. Sometimes I come across comments making fun of the passage “with all aspects of computer science” on social media, but the message we put into the job requirement has played an important role in shaping PFN culture and so it has remained to date. We will continue to stick to this principle. That said, we also understand the need to come up with an expression that is not misleading. The domain of computer science has been expanding rapidly over the past decade. This trend will no doubt continue to accelerate. New fields of study will emerge after combining many different fields within and outside of the computer science domain. Considering this, we should revise the employment condition in light of the following factors:


・It will become more important to absorb the changes and progress that computer science will undergo and to become acquainted with new fields as they emerge, rather than being well-versed in all aspects of computer science at this point in time. (It is, of course, still necessary to have extensive knowledge.) We will treat an applicant’s eagerness and passion for learning as more important than their current knowledge.

・We value an applicant’s forward-looking attitude toward deepening an understanding of not only computer science but also other fields such as software engineering, life science and mechanical engineering.

・We welcome not only experts in the artificial intelligence field but also specialists in various areas of expertise to make innovation by combining new technologies.


This criterion has so far applied only to researchers, but I believe it is crucial for everyone, with no distinction between researchers and engineers, to be united in opening up a path to new technology: researchers need to have some engineering knowledge, while engineers need to make an effort to understand research as well. Therefore, we will make this a requirement for both researchers and engineers.

It is also an important duty for me to create a workplace in which all valuable PFN members can do their best to innovate and create new technology, which I will continue to actively work on.


PFN is looking for talented people with diverse expertise in various fields. If you are interested in working with us, please apply at the following link.


MN-1: The GPU cluster behind 15-min ImageNet


2017-11-30 11:00:05

Preferred Networks, Inc. has completed ImageNet training in 15 minutes [1,2]. This is the fastest time to perform a 90-epoch ImageNet training ever achieved. Let me describe the MN-1 cluster used for this accomplishment.

Preferred Networks’ MN-1 cluster started operation this September [3]. It consists of 128 nodes with 8 NVIDIA P100 GPUs each, for 1024 GPUs in total. As each GPU has a theoretical peak of 4.7 TFLOPS in double-precision floating point, the cluster’s total theoretical peak exceeds 4.7 PFLOPS (including the CPUs as well). The nodes are connected with two FDR InfiniBand links (56 Gbps x 2). PFN has exclusive use of the cluster, which is located in an NTT datacenter.
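A quick back-of-the-envelope check of these figures (all numbers are taken from this article; exact per-SKU specs may differ slightly):

```python
# Figures quoted in the article above.
nodes = 128
gpus_per_node = 8
fp64_tflops_per_gpu = 4.7      # P100 theoretical peak, double precision

total_gpus = nodes * gpus_per_node                          # 1024
gpu_peak_pflops = total_gpus * fp64_tflops_per_gpu / 1000   # ~4.81 PFLOPS

# LINPACK result from the cluster's TOP500 entry; against the GPU-only
# peak this gives roughly 29% efficiency (the 28% figure quoted in the
# article is relative to the full peak including CPUs).
linpack_pflops = 1.39
efficiency = linpack_pflops / gpu_peak_pflops               # ~0.289
```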


MN-1 Cluster in an NTT Datacenter

On the TOP500 list published this November, the MN-1 cluster is ranked as the 91st most powerful supercomputer, with approx. 1.39 PFLOPS maximum performance on the LINPACK benchmark [4]. Compared to traditional supercomputers, MN-1’s computation efficiency (28%) is not high. One of the performance bottlenecks is the interconnect. Unlike typical supercomputers, MN-1 is connected as a thin tree (as opposed to a fat tree). Each group of sixteen nodes is connected to a pair of redundant InfiniBand switches, and the cluster has eight such groups, whose uplinks are aggregated in another redundant pair of InfiniBand switches. Thus, if a process needs to communicate with a node in a different group, the inter-group link becomes a bottleneck, which lowers the LINPACK benchmark score.

Distributed Learning in ChainerMN

However, as stated at the beginning of this article, MN-1 can perform ultra-fast deep learning (DL) training. This is because ChainerMN does not require bottleneck-free communication for DL training. During training, ChainerMN collects and re-distributes parameter updates among all nodes. In the 15-minute trial, we used the ring allreduce algorithm, in which each node communicates only with its adjacent nodes in a ring topology. The gradients are accumulated during the first round, and the accumulated parameter update is distributed during the second round. Since we can form such a ring without hitting the inter-group bottleneck on the full-duplex network, the MN-1 cluster can efficiently finish the ImageNet training in 15 minutes with 1024 GPUs.
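ChainerMN’s actual implementation runs over MPI/NCCL on GPUs; purely as an illustration of the two-round scheme described above, here is a minimal pure-Python simulation of ring allreduce (function and variable names are ours, not ChainerMN’s):

```python
def ring_allreduce(grads):
    """grads: one gradient vector (list of floats) per worker, all the
    same length. Returns the per-worker buffers after allreduce; every
    buffer ends up holding the element-wise sum over all workers."""
    p = len(grads)                 # number of workers in the ring
    n = len(grads[0])
    chunk = n // p                 # assume n is divisible by p
    bufs = [list(g) for g in grads]

    def piece(r, c):
        # Copy of chunk c currently held by worker r.
        return bufs[r][c * chunk:(c + 1) * chunk]

    # Round 1 (reduce-scatter): at step s, worker r sends chunk
    # (r - s) mod p to its ring neighbor (r + 1) mod p, which
    # accumulates it. After p - 1 steps, worker r holds the fully
    # reduced chunk (r + 1) mod p.
    for s in range(p - 1):
        sends = [((r - s) % p, piece(r, (r - s) % p)) for r in range(p)]
        for r, (c, data) in enumerate(sends):
            dst = (r + 1) % p
            for i in range(chunk):
                bufs[dst][c * chunk + i] += data[i]

    # Round 2 (allgather): circulate the finished chunks once more
    # around the ring, overwriting instead of accumulating.
    for s in range(p - 1):
        sends = [((r + 1 - s) % p, piece(r, (r + 1 - s) % p))
                 for r in range(p)]
        for r, (c, data) in enumerate(sends):
            dst = (r + 1) % p
            bufs[dst][c * chunk:(c + 1) * chunk] = data
    return bufs
```

Each worker sends only n/p elements per step over 2(p - 1) steps, so the total traffic per link is about 2n regardless of the number of workers; this is why the algorithm needs only neighbor-to-neighbor bandwidth rather than a bottleneck-free network.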

Scalability of ChainerMN up to 1024 GPUs

[1] https://arxiv.org/abs/1711.04325

[2] https://www.preferred-networks.jp/en/news/pr20171110

[3] https://www.preferred-networks.jp/en/news/pr20170920

[4] https://www.preferred-networks.jp/en/news/pr20171114