On the morning of July 20th, the Intel and MOS AI strategic cooperation conference, themed "Drive AI, Calculate Future", was held at the Beijing Global Trade Center. Greg Pearson, Intel SVP and general manager of the Sales & Marketing Group; Zhang Jinmao, chief scientist and TC executive chairman of Meituan-Dianping; Li Shang, general manager of MOS; and Liang Yali, general manager of Intel's Industry Solution Group in China, attended the conference as honored guests. Partners of Intel and MOS, including Huawei, Sogou, Inspur, SpeedClouds, CTAccel, Rokid and Sequoia, also took part in the event.
According to the agreement they signed, Intel will provide hardware, software and relevant technical expert resources for product testing, offer the technical expert support needed for developing and innovating on the Meituan public cloud platform, and help MOS create its AI/machine learning public cloud platform. MOS, in turn, will provide software development and relevant technical expert resources and give timely feedback on its development and testing progress and results, so that both sides can work together smoothly. Going forward, Intel and MOS will further deepen their strategic cooperation, accelerate AI development, and usher in the next generation of the intelligent Internet.
MOS is the public cloud computing platform of Meituan-Dianping, aimed at delivering secure, stable and reliable cloud services. Through its cooperation with Intel, MOS is launching a series of technical products based on Intel Xeon Phi.
With FPGA cloud servers, users can obtain and deploy FPGA computing instances in minutes, gaining strong acceleration and excellent programmability. Even better, FPGA cloud servers cost much less than GPU cloud servers of the same performance: users do not need to purchase an FPGA board, since the service is billed pay-as-you-go, lowering upfront costs.
Cloud servers built on Intel Xeon Phi processors can run x86 applications directly, just like a CPU, and share the same coding environment, tools and languages as Xeon processors. They integrate 16 GB of super-fast on-package memory with up to 490 GB/s of bandwidth, roughly 4 to 5 times that of DDR4. In addition, they offer excellent system scalability while achieving lower network latency and lower power consumption.
The deep learning platform is a task-training SaaS platform based on TensorFlow. It supports running native TensorFlow programs, and tasks can be created, managed and monitored through a web interface, making training more convenient and management more effective, freeing users from computing-resource constraints and meeting their training demands.
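As an illustration of the kind of self-contained job such a platform accepts, here is a minimal native TensorFlow training program (a sketch using the TensorFlow 2 Keras API; the article does not specify the platform's actual interface or the TensorFlow version it supports, so the details below are assumptions):

```python
import tensorflow as tf

# Toy dataset: y = 2x. A real user task would load its own data,
# but the script is structured the same way: data, model, fit.
x = tf.constant([[1.0], [2.0], [3.0], [4.0]])
y = tf.constant([[2.0], [4.0], [6.0], [8.0]])

# Single-layer linear model trained with plain SGD.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")

# The platform would run this script as a training task and let the
# user monitor its progress (e.g. the loss curve) via the web interface.
history = model.fit(x, y, epochs=200, verbose=0)
print("final loss:", history.history["loss"][-1])
```

Because the script is an ordinary TensorFlow program with no platform-specific calls, the same file runs unchanged on a laptop or when submitted to the hosted service.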
Greg Pearson said that artificial intelligence is the hottest area this year, built on the rapid growth of big data in recent years. But big data is not just about having the data; it must also be understood and filtered before it can be applied to real scenarios to meet different requirements and customers' personal needs. AI should be driven by customers' needs, and Meituan has a great amount of consumer data, which fits well with the application of Intel's technologies. That is why Intel is very glad to collaborate with Meituan.
Zhang Jinmao said that AI is now moving into a long-term flourishing phase. Meituan has put forward three strategies, one of which is to use new technologies such as AI, which is changing various life scenarios, in pursuit of its goal of "Eat Better, Live Better". AI technologies can help Meituan improve operational efficiency and strengthen risk control. As an O2O platform, Meituan has a large sales force and a massive big-data service capacity, so the collaboration with Intel is a win-win.