「Free | Major Event」The First Intelligent Decision-Making Forum opens on September 19! (With talk titles and abstracts from 10+ speakers)「Institute of Automation, Chinese Academy of Sciences」




Talks

Morning, September 19

Xiaotie Deng (邓小铁)

  • Talk title:

On the Interplay of AI and Game Theory

  • Abstract:

The rapid development of AI has expanded to revolutionize algorithmic design in computer science and beyond. In this talk, we discuss some of this progress in game theory, economics, and other managerial sciences, centered on the speaker's own research focus of algorithmic game theory and social/economic mechanism design.

  • Bio:

Xiaotie Deng is a professor at the Center on Frontiers of Computing Studies, Peking University, a foreign member of Academia Europaea, an ACM Fellow, and an IEEE Fellow. He received his Ph.D. from Stanford University in 1989 and went on to serve as an assistant researcher at the Institute of Systems Science, Chinese Academy of Sciences, and as an assistant professor in the Department of Computer Science and Engineering at York University, Canada. His main research areas are algorithmic game theory, Internet economics, online algorithms, parallel computing, and blockchain. His recent interests include equilibrium and mechanism design, online advertising systems, cloud-computing pricing and resource allocation, behavioral analysis of social networks, recommender systems, and algorithms for transportation and logistics networks. As principal investigator he has led more than a dozen projects funded by agencies in Canada, Hong Kong, and the UK and by the National Natural Science Foundation of China, and he serves on the editorial boards of several international journals. He has chaired numerous international conferences and founded the Conference on Web and Internet Economics (WINE), a global meeting on network economics that rotates among Asia, Europe, and the Americas. His work on algorithmic game theory and web search is widely cited internationally, with over 200 publications and thousands of citations; he has given many invited talks and received a best paper award at FOCS, the IEEE conference on foundations of computer science. His work "On several classical problems in graphs and combinatorial optimization" won second prize (ranked second) of the 2015 Award for Outstanding Scientific Research Achievements in Higher Education (Natural Science). On the applied side, he holds several US and Chinese patents and has served as a mechanism-design consultant to major Internet companies.

Chongjie Zhang (张崇洁)

  • Talk title:

Learning to Collaborate in Complex Environments

  • Abstract:

Many real-world AI problems are naturally modelled as cooperative multi-agent systems, where a group of agents work together to achieve a common goal. In this talk, I will discuss some key challenges in designing reinforcement learning methods for agents to efficiently learn to collaborate in complex environments, including credit assignment, scalability, uncertainty, and heterogeneity. I will also present several approaches proposed in my group that address these challenges and enable efficient learning for multi-agent collaboration.

  • Bio:

Chongjie Zhang is an assistant professor and doctoral advisor at the Institute for Interdisciplinary Information Sciences, Tsinghua University. He received his Ph.D. in computer science from the University of Massachusetts Amherst in 2011 and then conducted postdoctoral research at MIT. His research interests include artificial intelligence, deep reinforcement learning, multi-agent systems, and robotics, with publications at international conferences including AAAI, NeurIPS, IJCAI, and AAMAS.

Bo An (安波)

  • Talk title:

When AI Meets Game Theory

  • Abstract:

The 2017 human-vs-machine Texas Hold'em match was the most talked-about event after AlphaGo's Go matches: a team of top human players was soundly beaten by Carnegie Mellon University's Libratus system. Libratus's success owed nothing to deep learning, the hottest topic of recent years; it is entirely attributable to the past decade's progress in large-scale game computation. Over the past few years, security game theory, a theory of resource allocation and scheduling in security domains, has been established and successfully applied in areas such as airport security, air marshal scheduling, coast guard patrol scheduling, and wildlife protection, drawing wide attention; US congressional hearings have repeatedly cited this research and its applications. The success of security game theory likewise rests on advances in solving large-scale games. This talk will discuss the challenges of large-scale game computation, the main progress and successful applications of recent years, and the applications and challenges of reinforcement learning in large-scale games.
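As a hedged illustration (not the Libratus pipeline itself, which relies on abstraction and counterfactual-regret techniques), the textbook starting point for "game computation" is solving a zero-sum matrix game as a linear program. The sketch below uses SciPy and rock-paper-scissors as a stand-in:

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Maximin mixed strategy for the row player of payoff matrix A.

    Variables are the strategy x (length m) plus the game value v;
    we minimize -v subject to v <= (x^T A)_j for every column j.
    """
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # maximize v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - (x^T A)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0                             # probabilities sum to 1
    b_eq = np.ones(1)
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Rock-paper-scissors: the unique equilibrium mixes uniformly, value 0.
rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
strategy, value = solve_zero_sum(rps)
```

The LP scales polynomially in the matrix size; the difficulty the talk addresses is that games like Hold'em have state spaces far too large to write down as a matrix at all.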

  • Bio:

Bo An is President's Council Chair Associate Professor at Nanyang Technological University, Singapore, a member of the JAIR editorial board, associate editor of JAAMAS, IEEE Intelligent Systems, and ACM TIST, and program co-chair of AAMAS'20. He received his Ph.D. in computer science from the University of Massachusetts Amherst in 2011. His main research areas include artificial intelligence, multi-agent systems, algorithmic game theory, reinforcement learning, and optimization. He has published over 100 papers at top AI venues including AAMAS, IJCAI, AAAI, ICAPS, KDD, UAI, EC, WWW, ICLR, NeurIPS, and ICML. His honors include the 2010 IFAAMAS Distinguished Dissertation Award, the 2011 US Coast Guard Operational Excellence Award, the 2012 AAMAS Best Application Paper Award, the 2012 INFORMS Daniel H. Wagner Prize for excellence in operations research practice, the 2016 IAAI Innovative Application Award, first place in the 2017 Microsoft Collaborative AI Challenge, and the 2018 Nanyang Research Award (Young Investigator). He was invited to give an Early Career Spotlight talk at IJCAI'17.

Afternoon, September 19

Zhijian Wang (王志坚)

  • Talk title:

The Dynamic Structure of Games

  • Abstract:

Game dynamics studies the laws governing the motion of game play; typical questions include how a Nash equilibrium is reached and what regularities the process exhibits. Its theoretical and experimental sides belong to game theory and experimental economics, respectively, and each has a history of roughly fifty years. Over the past decade, advances in measurement have made dynamic structure explicitly observable in experiments. This has made mutual verification between theory and experiment possible, giving the field the completeness and vitality of a science. From time series in the social state space one can extract a set of observables of game-dynamic structure (distribution configurations, probability flows, velocity fields, acceleration fields, rotation, phase and amplitude, and so on) that connect several classes of game-theoretic predictions with experimental results, so as to test and develop the scientific standing of game theory. Taking this set of observables as a handle, and using the "strategy-space collapse" process in a minimal Texas Hold'em game in both theory and human behavioral experiments as an example, this talk introduces the motivation, current state, and open problems of research on laws of motion in multi-agent games.
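One observable from the list above, rotation in the state space, can be shown with a toy measurement. The sketch below is a simplification of the cycle counting used in such experiments; the 0/1/2 encoding for rock/paper/scissors is an assumption for illustration:

```python
def net_cycles(series):
    """Net number of R->P->S cycles in a sequence of plays (0=R, 1=P, 2=S).

    Each forward transition adds a third of a cycle, each reverse one
    subtracts; a persistently non-zero result reveals rotation in the
    state space rather than convergence to a fixed point.
    """
    forward = {(0, 1), (1, 2), (2, 0)}
    backward = {(1, 0), (2, 1), (0, 2)}
    count = 0
    for a, b in zip(series, series[1:]):
        if (a, b) in forward:
            count += 1
        elif (a, b) in backward:
            count -= 1
    return count / 3.0

# Two full forward cycles -> net rotation of +2.
rotation = net_cycles([0, 1, 2, 0, 1, 2, 0])
```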

  • Bio:

Zhijian Wang is a researcher and doctoral advisor at the Experimental Social Science Laboratory, Zhejiang University. A student of Prof. Wenzhu Li, he received his Ph.D. in theoretical physics from Zhejiang University in 1995. His research covers evolutionary game theory, experimental economics, non-equilibrium statistical physics, and agent-based simulation. His main contribution is the experimental discovery of dynamic structure in mixed Nash equilibria. The work has been published in Nature Communications and other leading journals; it has been recommended, covered, or reprinted by hundreds of media outlets and professional organizations spanning natural philosophy, mathematics, psychology, and physics, used as reference material in undergraduate courses on game theory, microeconomics, optimization, and computer science in China and abroad, and selected among the MIT Technology Review's best results of 2014 and the BBC's 2014 science news highlights.

Pinyan Lu (陆品燕)

  • Talk title:

Optimal Competitive Auction

  • Abstract:

We study the design of truthful auctions for selling identical items in unlimited supply (e.g., digital goods) to n unit demand buyers. This classic problem stands out from profit-maximizing auction design literature as it requires no probabilistic assumptions on buyers' valuations and employs the framework of competitive analysis. Our objective is to optimize the worst-case performance of an auction, measured by the ratio between a given benchmark and revenue generated by the auction. 

We establish a necessary and sufficient condition that characterizes competitive ratios for all monotone benchmarks. The characterization identifies the worst-case distribution of instances and reveals intrinsic relations between competitive ratios and benchmarks in the competitive analysis. With the characterization at hand, we show optimal competitive auctions for two natural benchmarks.

The most well-studied benchmark F2 measures the envy-free optimal revenue where at least two buyers win. Goldberg et al. showed a sequence of lower bounds on the competitive ratio for each number of buyers n. They conjectured that all these bounds are tight. We show that optimal competitive auctions match these bounds. Thus, we confirm the conjecture and settle a central open problem in the design of digital goods auctions.
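The benchmark F2 above has a standard closed form in the digital-goods literature (Goldberg et al.): sell to the k highest bidders at the k-th highest value, for the best choice of k >= 2. A minimal sketch:

```python
def f2_benchmark(values):
    """Envy-free optimal revenue with at least two winners.

    F2 = max over k >= 2 of (k times the k-th highest value): the k
    highest bidders all pay the k-th highest value.
    """
    v = sorted(values, reverse=True)
    return max(k * v[k - 1] for k in range(2, len(v) + 1))

# With values (10, 4, 3, 1): k=2 gives 8, k=3 gives 9, k=4 gives 4.
best = f2_benchmark([10, 4, 3, 1])
```

A truthful auction's competitive ratio is then the worst case, over value profiles, of F2 divided by the auction's revenue.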

  • Bio:

Pinyan Lu is a professor at the School of Information Management and Engineering, Shanghai University of Finance and Economics, and director of its Institute for Theoretical Computer Science (ITCS). He received his Ph.D. from the Department of Computer Science and Technology, Tsinghua University, in January 2009, then joined Microsoft Research Asia, serving in the theory group as associate researcher, researcher, and lead researcher. In December 2015 he joined SUFE full time to found ITCS; after four-plus years of building, the center ranks first in Asia and in the global top ten for algorithms and complexity on CSRankings. His main research area is theoretical computer science and its interdisciplinary connections. He has published 27 papers at the three top theory conferences (STOC/FOCS/SODA) and won best paper awards at ICALP 2007, FAW 2010, and ISAAC 2010. His honors include the CCF Young Scientist Award (2014), ACM Distinguished Scientist (2019), and a silver medal of the ICCM Mathematics Award (formerly the Morningside Medal) at the 8th International Congress of Chinese Mathematicians (2019). He has served as program co-chair of FAW-AAIM 2012, WINE 2017, FAW 2018, and ISAAC 2019, and as a program committee member for STOC, FOCS, SODA, and other top conferences.

Liangjun Ke (柯良军)

  • Talk title:

Multi-agent Reinforcement Learning for Large-scale UAV Swarm Cooperative Attack-Defense Confrontation

  • Abstract:

This talk considers the problem of large-scale unmanned aerial vehicle (UAV) swarm attack-defense confrontation in a three-dimensional environment. We propose a new multi-agent reinforcement learning (MARL) algorithm, Cooperative Deep Deterministic Policy Gradient (CODDPG), with the following characteristics. It adopts the actor-critic framework, with all agents sharing a common network, and it explicitly models cooperation between UAVs. To scale to large problems, it adopts mean-field theory (MFT) together with a new local state representation and reward-allocation method. CODDPG also uses a new network structure that accounts for the actions executable in the current state. Finally, the algorithm follows a centralized-training, decentralized-execution framework, so each agent can make decisions from its local observation alone. To study the algorithm, we designed a large-scale UAV swarm confrontation platform that incorporates flight constraints and a realistic UAV environment. Experiments show that CODDPG outperforms several mainstream MARL algorithms on this platform.
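The mean-field device mentioned above replaces the exponentially large joint action with the average action of an agent's neighbours, so a value estimate conditions on Q(s, a_i, a-bar) instead of Q(s, a_1, ..., a_N). A hedged toy sketch (not the CODDPG implementation; the tabular update is an assumption for illustration):

```python
def mean_action(neighbour_actions, n_actions):
    """Empirical distribution (mean of one-hot vectors) of neighbours' actions."""
    counts = [0.0] * n_actions
    for a in neighbour_actions:
        counts[a] += 1.0
    return [c / len(neighbour_actions) for c in counts]

def mf_td_update(Q, key, target, alpha=0.1):
    """One tabular TD step on Q[(state, own_action, mean_action)]."""
    Q[key] = Q.get(key, 0.0) + alpha * (target - Q.get(key, 0.0))
    return Q[key]

# Three neighbours choose actions 0, 0, 1 out of two possible actions,
# so the mean action is [2/3, 1/3]; the agent's Q is keyed on it.
abar = mean_action([0, 0, 1], 2)
q = mf_td_update({}, ("s0", 1, tuple(abar)), target=1.0)
```

The point is the key shape: however many UAVs fly, the critic input stays fixed-size.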

  • Bio:

Liangjun Ke is a professor and doctoral advisor at the School of Automation, Xi'an Jiaotong University, an IEEE member, director of the XJTU Center for Intelligent Perception and Decision-Making, a member of the Intelligent Simulation Optimization and Scheduling Committee of the China Simulation Federation and of the UAV Autonomous Control Committee of the Chinese Association of Automation, and an editorial board member of Control Theory & Applications. He has authored the books Reinforcement Learning and Ant Colony Optimization and Its Applications.

Jun Wang (汪军)

  • Talk title:

Multi-agent Learning

  • Abstract:

Multi-agent learning arises in a variety of domains where intelligent agents interact not only with the (unknown) environment but also with each other. Its applications range from controlling groups of autonomous vehicles, robots, and drones to coordinating collaborative bots on production lines, optimizing distributed sensor networks and traffic, and machine bidding in competitive e-commerce and financial markets, to name a few.

Yet the non-stationary nature of such systems calls for new theory that brings interaction into the learning process. In this talk, I shall provide an up-to-date introduction to the theory and methods of multi-agent AI, with a focus on competition, collaboration, and communication among intelligent agents. Work in both game theory and machine learning will be examined in a unified treatment. I shall also sample our recent work on the subject, including mean-field multi-agent reinforcement learning, stochastic potential games, and solution concepts beyond Nash equilibrium.

  • Bio:

Jun Wang is a professor in the Department of Computer Science at University College London (UCL), a Turing Fellow of the Alan Turing Institute, and chief advisor on decision-making and reasoning at Huawei's Noah's Ark Lab. His research centers on intelligent information systems, including machine learning, reinforcement learning, multi-agent systems, data mining, computational advertising, and recommender systems. He has published over 120 papers and two monographs and has won multiple best paper awards.

Weinan Zhang (张伟楠)

  • Talk title:

Bidirectional Model-Based Policy Optimization

  • Abstract:

Model-based reinforcement learning approaches leverage a forward dynamics model to support planning and decision making, which, however, may fail catastrophically if the model is inaccurate. Although several existing methods are dedicated to combating model error, the potential of a single forward model is limited. In this work, we propose to additionally construct a backward dynamics model to reduce reliance on the accuracy of forward-model predictions. We develop a novel method, Bidirectional Model-based Policy Optimization (BMPO), which utilizes both the forward and backward models to generate short branched rollouts for policy optimization. Furthermore, we theoretically derive a tighter bound on return discrepancy, which shows the superiority of BMPO over methods using merely the forward model. Extensive experiments demonstrate that BMPO outperforms state-of-the-art model-based methods in terms of sample efficiency and asymptotic performance.
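The bidirectional-rollout idea can be sketched as follows (a hedged toy, not the authors' code; the deterministic chain models, the backward policy, and the horizon k=3 are assumptions): from a real state, roll a learned forward model a few steps ahead and a learned backward model a few steps back, and feed both model-generated segments to policy optimization.

```python
def bidirectional_rollout(s0, policy, backward_policy,
                          forward_model, backward_model, k=3):
    """Model-generated segment of 2k+1 states centred on real state s0."""
    segment = [s0]
    s = s0
    for _ in range(k):                  # forward branch: what happens next
        s = forward_model(s, policy(s))
        segment.append(s)
    s = s0
    for _ in range(k):                  # backward branch: what led here
        s = backward_model(s, backward_policy(s))
        segment.insert(0, s)
    return segment

# Toy deterministic chain environment: actions are ignored and the
# state just moves +/-1, so the segment around state 0 is -3..3.
segment = bidirectional_rollout(
    0, policy=lambda s: 0, backward_policy=lambda s: 0,
    forward_model=lambda s, a: s + 1, backward_model=lambda s, a: s - 1)
```

Because errors compound with rollout length, splitting a length-2k rollout into two length-k branches around a real state is what keeps the model-generated data close to the true dynamics.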

  • Bio:

Weinan Zhang is a tenure-track associate professor at the John Hopcroft Center for Computer Science, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University. He received his bachelor's degree from the ACM Class of SJTU's computer science department in 2011 and his Ph.D. from the Department of Computer Science, University College London, in 2016, with internships at Microsoft, Google, and DERI. His research covers reinforcement learning, deep learning, data science, and knowledge graphs, with applications in personalized web services and game AI; he has published more than 80 papers at international conferences and in journals. He placed third worldwide in the 2011 KDD Cup personalized recommendation competition and won the 2013 global real-time-bidding display advertising competition; his other honors include the 2017 ACM Shanghai Rising Star Award and a SIGIR 2017 best paper honorable mention, the 2018 Huawei Best Collaboration Contribution Award and the inaugural DAMO Academy Young Fellow Award, and the best paper award of the KDD 2019 Deep Learning Practice workshop. He serves as a (senior) program committee member for machine learning and data science conferences including ICML, NeurIPS, ICLR, KDD, AAAI, IJCAI, and SIGIR, as a reviewer for JMLR, TOIS, TKDE, and TIST, and as a young associate editor of FCS.

Zongzhang Zhang (章宗长)

  • Talk title:

Advances in Transfer Reinforcement Learning

  • Abstract:

Transfer reinforcement learning has become a hot topic in reinforcement learning in recent years. Its core idea is to apply transfer learning within the reinforcement learning process, so that when learning a target task an agent can draw on knowledge from similar tasks to improve learning efficiency on the target. In this talk, I will introduce three of our recent works on transfer reinforcement learning. The first is a deep Bayesian policy reuse method for non-stationary Markov games; it combines value-function approximation based on deep neural networks, Bayesian opponent modeling to infer other agents' policies, and a distilled policy network for efficient online policy learning and reuse. The second is a policy transfer framework built on policy reuse, consisting of an agent module and an option module: the option module selects a suitable source policy, and the agent module uses knowledge from that source policy to directly optimize the target policy. The third is a policy adaptation method based on robust environment inference, which uses environment features obtained through variational inference to directly optimize the target policy.
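The Bayesian step at the heart of the first work can be sketched as a belief update over which known task (or opponent policy) is currently active; the agent then reuses the policy of the most probable one. A toy sketch under assumed likelihood values:

```python
def update_belief(belief, likelihoods):
    """Posterior over tasks: belief[i] proportional to belief[i] * P(obs | task i)."""
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    z = sum(posterior)
    return [p / z for p in posterior]

# Start uniform over two known opponents; an observation that is far
# likelier under opponent 0 shifts the belief there, and the agent
# would reuse the policy trained against opponent 0.
belief = update_belief([0.5, 0.5], [0.9, 0.1])
reuse_index = belief.index(max(belief))
```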

  • Bio:

Zongzhang Zhang is an associate professor and master's advisor at the School of Artificial Intelligence, Nanjing University, a member of the State Key Laboratory for Novel Software Technology, and a member of the CCF Technical Committee on Artificial Intelligence and Pattern Recognition. He received his Ph.D. from the University of Science and Technology of China in 2012, conducted research at Rutgers University, the National University of Singapore, and Stanford University, and worked at Soochow University from 2014 to 2019. His research interests include reinforcement learning, probabilistic planning, and imitation learning. He has published over 30 papers at major international conferences (AAAI, ICML, IJCAI, NeurIPS, AAMAS, ICAPS, UAI, etc.) and in Chinese and international journals (Chinese Journal of Computers, Journal of Software, FCS, JCST, etc.), translated two books, contributed two monograph chapters, and filed 15 patent applications, of which 6 have been granted and 3 transferred to industry. He co-founded the Asian Workshop on Reinforcement Learning (AWRL), serves as a young associate editor of FCS, a senior program committee member of top AI conferences such as AAAI and IJCAI, a reviewer for more than ten journals (JAIR, Science China, etc.) and more than ten conferences (NeurIPS, ICML, etc.), and a co-chair of reinforcement learning workshops at ACML and PRICAI. In recent years he has led two national and two provincial research projects and collaborates with several companies.

Chao Yu (余超)

  • Talk title:

Reinforcement Learning for Imperfect-Information Games

  • Abstract:

In imperfect-information games, players cannot observe all the information of the game. Such games have wide real-world applications, including political games, military confrontation, and commercial bidding. This talk briefly reviews the main methods and applications of current research on imperfect-information games, then focuses on solving for optimal policies with reinforcement learning, from the angles of modeling uncertain opponents and of policy search integrated with counterfactual regret minimization (CFR).
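The CFR family mentioned above is built on regret matching: play each action with probability proportional to its accumulated positive regret. A self-contained sketch on rock-paper-scissors (self-play; the instantaneous strategy cycles, but the average strategy approaches the uniform equilibrium):

```python
import random

PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]   # row player's RPS payoffs

def strategy_from_regrets(regrets):
    """Mix proportionally to positive regret; uniform if none is positive."""
    positive = [max(r, 0.0) for r in regrets]
    z = sum(positive)
    return [p / z for p in positive] if z > 0 else [1 / 3] * 3

def train(iterations=50000, seed=0):
    rng = random.Random(seed)
    regrets = [0.0] * 3
    strategy_sum = [0.0] * 3
    for _ in range(iterations):
        strat = strategy_from_regrets(regrets)
        strategy_sum = [s + p for s, p in zip(strategy_sum, strat)]
        a = rng.choices(range(3), weights=strat)[0]
        b = rng.choices(range(3), weights=strat)[0]   # self-play opponent
        for i in range(3):        # regret of not having played action i
            regrets[i] += PAYOFF[i][b] - PAYOFF[a][b]
    z = sum(strategy_sum)
    return [s / z for s in strategy_sum]              # average strategy

avg = train()
```

Full CFR applies this update at every information set of a game tree; combining that with reinforcement-learning-style policy search is the direction the talk discusses.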

  • Bio:

Chao Yu is an associate professor at the School of Data and Computer Science, Sun Yat-sen University, a national "Hong Kong Scholar," and a high-level innovation talent of Dalian. He received his bachelor's degree from the Department of Electronics and Information Engineering, Huazhong University of Science and Technology, in 2007 and his Ph.D. in computer science from the University of Wollongong, Australia, in 2013. He joined the School of Computer Science at Dalian University of Technology as a lecturer in 2014, was promoted early to associate professor in 2016, and joined Sun Yat-sen University under its "Hundred Talents Program" in December 2019. His main research areas are reinforcement learning and the theory of agents and multi-agent systems, with applications in intelligent swarms, robots and multi-robot systems, autonomous driving and connected vehicles, and smart healthcare. He has published over 70 papers in international journals and conferences including IEEE TNNLS, IEEE Transactions on Cybernetics, IEEE TVT, and ACM Transactions, and leads more than ten projects funded by the National Natural Science Foundation of China and military research programs.

Ying Wen (温颖)

  • Talk title:

Deep Multi-agent Reinforcement Learning: Direct and Game-Theoretical Approaches

  • Abstract:

Despite the recent success of applying deep reinforcement learning (RL) algorithms to various problems in the single-agent case, it is still challenging to transfer these methods to the multi-agent RL setting. The reason is that independent learning ignores the other agents in the environment, which breaks the theoretical guarantees of convergence. This talk first gives an overview of deep multi-agent reinforcement learning (MARL) concepts and challenges. Then we cover direct RL extensions to the multi-agent setting, including independent learners (IL) and various centralized-critic methods. Finally, we discuss some recent trends in applying game-theoretical analysis to deep MARL, such as policy-space response oracles (PSRO) and multi-agent trust-region learning (MATRL).

  • Bio:

Ying Wen is an assistant professor at the John Hopcroft Center, Shanghai Jiao Tong University. He received a first-class honours bachelor's degree jointly from Beijing University of Posts and Telecommunications and Queen Mary University of London in 2015, a master's degree with distinction from University College London in 2016, and recently completed his Ph.D. at UCL. His research covers reinforcement learning, multi-agent learning, game theory, and their real-world applications, with papers at leading international conferences including ICML, ICLR, IJCAI, and AAMAS.

Dengji Zhao (赵登吉)

  • Talk title:

Mechanism Design Powered by Social Interactions

  • Abstract:

This talk introduces a novel mechanism design theory for information propagation on a social network such that truthful information will be fully propagated or collected via the network. The goal is to design resource or task allocation mechanisms such that existing participants are incentivized to invite more participants via their (private) social connections, which enables better resource or task allocations than traditional mechanisms. We will study both cooperative and non-cooperative settings such as auctions and coalitional games.

  • Bio:

Dengji Zhao is a research professor and doctoral advisor at ShanghaiTech University and director of the ShanghaiTech-Esquel joint laboratory. He received dual M.Sc. degrees (in computational logic) from TU Dresden, Germany, and the Technical University of Madrid, Spain, in 2009, and a dual Ph.D. in computer science from Western Sydney University, Australia, and the University of Toulouse, France, in 2012, winning Toulouse's best dissertation award in 2013. From 2013 to 2016 he was a postdoctoral researcher, first with Prof. Makoto Yokoo, Asia's first AAAI Fellow, and then with AAAI/IEEE Fellow Nick Jennings, the UK's only Regius Professor in computer science. His main research interests are artificial intelligence and algorithmic game theory, with current applications in the sharing economy, such as Didi and Airbnb. He has published at top AI venues including AAAI, IJCAI, ECAI, and AAMAS, and regularly serves on their program committees and as a reviewer for related journals.

Afternoon, September 20

Zhengyang Liu (刘正阳)

  • Talk title:

On the Complexity of Sequential Posted Pricing

  • Abstract:

In this talk, we study the well-known sequential posted-pricing scheme for a single item in the Bayesian setting. Agents arrive at the auction market sequentially, and each is offered a take-it-or-leave-it price. The auctioneer's goal is to maximize her expected revenue.

We show that finding an optimal sequential posted-pricing scheme is NP-complete even when the buyers' value distributions have support size three. On the positive side, we give polynomial-time algorithms when the distributions have support size at most two, or when values are drawn from identical distributions.

This is joint work with Tao Xiao and Wenhan Huang.
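The objective being optimized can be made concrete. With independent discrete value distributions and a fixed price sequence, expected revenue has a simple product form, since buyer i only receives an offer if everyone before declined. A hypothetical sketch (the (value, probability) encoding is an assumption for illustration):

```python
def expected_revenue(prices, distributions):
    """Expected revenue of posting prices[i] to buyer i, in arrival order.

    distributions[i] is a list of (value, probability) pairs for buyer i;
    the first buyer whose value meets the offered price takes the item.
    """
    revenue, p_unsold = 0.0, 1.0
    for price, dist in zip(prices, distributions):
        p_accept = sum(p for v, p in dist if v >= price)
        revenue += p_unsold * p_accept * price
        p_unsold *= 1.0 - p_accept
    return revenue

# Two i.i.d. buyers valuing the item at 1 with probability 1/2 each:
# posting price 1 to both earns 0.5 * 1 + 0.5 * 0.5 * 1 = 0.75.
rev = expected_revenue([1, 1], [[(0, 0.5), (1, 0.5)]] * 2)
```

Evaluating a given price sequence is easy; the hardness result concerns searching over all price sequences (and arrival orders) for the optimum.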

  • Bio:

Zhengyang Liu is an assistant professor at the School of Computer Science, Beijing Institute of Technology. He received his bachelor's and doctoral degrees from Shanghai Jiao Tong University in 2013 and 2018, respectively. His research covers algorithmic game theory and reinforcement learning, with papers at the top theory conference STOC, the top AI conferences AAAI and AAMAS, and the top computational complexity conference CCC. He has served on the program committees of IJCAI 2020, AAAI 2020, and FAW 2020, and as a reviewer for journals and conferences including SIAM Journal on Computing, Theoretical Computer Science, Algorithmica, SAGT'17, SODA'16 and '21, ICALP'15, and ISAAC'14.

Yali Du (杜雅丽)

  • Talk title:

Agent Learning in the Emergence of Complex World

  • Abstract:

In recent years, we have witnessed great success of AI in many applications, including image classification, recommendation systems, and more. This success has largely shared one paradigm: learning from static datasets of inputs and desired outputs. Nowadays, we are experiencing a paradigm shift: instead of learning knowledge from static datasets, we learn through feedback on our decisions. In particular, as machine learning models are deployed in the real world, they affect one another, turning their decision making into a multi-agent problem. Agent learning in a complex world is therefore a fundamental problem for the next generation of AI to empower various multi-agent environments. In this talk, I will present two of my research thrusts: learning with a varying number of agents, as often arises in game AI and unmanned-vehicle control, and learning diverse cooperative behaviours to avoid lazy free-riders. Our methods achieve new state-of-the-art results on the real-time strategy game StarCraft II, which has emerged as a challenging RL benchmark with high stochasticity, high-dimensional inputs, and partial observability. I will conclude with a discussion of some future directions.

  • Bio:

Yali Du is a postdoctoral researcher at University College London and a visiting researcher at Huawei's London research lab. She received her bachelor's degree from Northwestern Polytechnical University in 2014 and her Ph.D. from the University of Technology Sydney in 2019. Her research interests include machine learning and reinforcement learning and their applications in game AI, recommendation and retrieval, and classical control. Her current work centers on multi-agent algorithms, including flexibly controlling an arbitrary number of agents, rewarding diverse behaviors, multi-agent credit assignment, learning multi-agent interaction structures, and model robustness. Her results have appeared widely in venues including ICML, NeurIPS, IJCAI, ACM MM, and IEEE TMM.

Fangwei Zhong (钟方威)

  • Talk title:

Learning Vision-Based Agent Under Multi-agent Game

  • Abstract:

Building vision-based agents that intelligently interact with the world and accomplish tasks is fundamental and challenging. In this talk, I will report our recent efforts on using multi-agent games to improve the robustness, sample efficiency, and generalization of vision-based agents. In the first work (ICLR 2019, TPAMI), we introduce a competitive multi-agent game for active object tracking. In the game, the tracker and the target, viewed as two learnable agents, are opponents that mutually enhance each other during competition: the tracker intends to lock onto the target, while the target tries to escape from the tracker. In the second work (AAAI 2020), we propose a pose-assisted multi-agent collaboration system for multi-camera single-object tracking. In this system, agents exploit the intrinsic relationship among camera poses to cooperatively enhance tracking performance in challenging cases, such as heavy occlusion.

  • Bio:

Fangwei Zhong is a Ph.D. student in computer applications at the School of Information Science and Technology, Peking University, advised by Prof. Yizhou Wang. His research focuses on autonomous learning and its applications in robot vision. He has published eight papers at leading AI venues including TPAMI, ICLR, ICML, CVPR, and AAAI, and serves as a program committee member or reviewer for top international conferences including NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, and AAAI.

Zheng Tian (田政)

  • Talk title:

Learning to Communicate Implicitly by Actions

  • Abstract:

In collaborative multi-agent systems, communication is essential for agents to learn to behave as a collective rather than a collection of individuals. This is particularly important in the imperfect-information setting, where private information becomes crucial to success. In such cases, efficient communication protocols between agents are needed for private information exchange, coordinated joint-action exploration, and true world-state inference. However, environments where explicit communication is difficult or prohibited are common. In this talk, I will present a paper from our group on how to train agents to communicate by actions.

  • Bio:

Zheng Tian is a Ph.D. student in Prof. Jun Wang's group at University College London. He was a core member of the UCL team that developed ExIt, a reinforcement learning framework combining tree search with neural networks; that work was developed independently of, and concurrently with, DeepMind's AlphaZero. His current research focuses on multi-agent reinforcement learning.

Xiang Yan (阎翔)

  • Talk title:

Latent Dirichlet Allocation for Internet Price War

  • Abstract:

Current Internet market makers face an intensely competitive environment, where their peers offer personalized price cuts or discount coupons to attract more customers. Heavy investment goes into keeping up with competitors, yet participants in such a price war are often incapable of winning for lack of information about others' strategies or customers' preferences. We formalize the problem as a stochastic game with imperfect and incomplete information and develop a variant of Latent Dirichlet Allocation (LDA) to infer latent variables of the current market environment, representing customers' preferences and competitors' strategies. Tests on simulated experiments and an open real-world dataset show that, by subsuming all available market information about a market maker's competitors, our model significantly improves understanding of the market environment and the search for best-response strategies in the Internet price war. Our work marks the first successful learning method to infer latent information in a price-war environment via LDA modeling, and sets an example for related competitive applications to follow.
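As a hedged illustration of the modeling tool only (plain LDA on a toy count matrix via scikit-learn, not the authors' variant for market signals), the sketch below infers a latent topic mixture per row, analogous to inferring a preference/strategy mixture per market observation:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus: rows are observation periods, columns are counts of
# observed signals (word-count stand-ins). The first two rows use one
# vocabulary half, the last two the other, suggesting two latent topics.
X = np.array([
    [5, 4, 0, 0],
    [4, 5, 0, 0],
    [0, 0, 5, 4],
    [0, 0, 4, 5],
])
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)   # one topic mixture per row
```

In the talk's setting the "documents" are market observations and the inferred mixtures stand in for hidden customer preferences and competitor strategies.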

  • Bio:

Xiang Yan is a Ph.D. student at Shanghai Jiao Tong University. During his studies he has visited the Hong Kong University of Science and Technology and Ant Financial as a research assistant, and since September 2019 he has been a visiting scholar at Harvard University funded by the China Scholarship Council. His research covers algorithmic game theory, mechanism design, computational economics, and multi-agent reinforcement learning. He has published at international conferences including IJCAI, AAAI, and WINE, won the best young paper award at the 8th China Meeting on Game Theory and Applications, and has served repeatedly as a reviewer for NeurIPS, AAAI, IPDPS, and other international conferences.

For more details, see the Shimo document:

https://shimo.im/docs/8dgwjW9CYDV8jg8Y/read
