【Lecture】2019/11/19 (Tue) @ Engineering Building 4, Room 816 (智易空間): Prof. Geoffrey Li (Georgia Tech, USA) and Prof. Li-Chun Wang (NCTU, Taiwan) on "Deep Learning based Wireless Resource Allocation / Deep Learning in Physical Layer Communications / Machine Learning Interference Management"
The IBM Center is pleased to invite Prof. Geoffrey Li (Georgia Tech, USA) and Prof. Li-Chun Wang (NCTU, Taiwan) to give these lectures. Faculty and students who are interested are welcome to register.
Title: Deep Learning based Wireless Resource Allocation / Deep Learning in Physical Layer Communications / Machine Learning Interference Management
Speakers: Prof. Geoffrey Li and Prof. Li-Chun Wang
Time: Tuesday, 2019/11/19, 9:00–12:00
Venue: Engineering Building 4, Room 816 (智易空間), NCTU
Registration link: https://forms.gle/vUr3kYBDB2vvKtca6
Registration details:
Fees (including handouts, lunch, and refreshments):
1. Fees: (1) free for NCTU students; NT$300 per person for students from other schools; (2) NT$1,500 per person for industry participants and faculty
2. Capacity: 60 participants, admitted in the order in which registration is completed (registration is complete only after payment is received)
※ Registration and payment:
1. Registration: fill in your information at the registration link above
2. Payment:
(1) Pay in person at Room 813, Engineering Building 4, NCTU (please call ahead before coming to pay)
(2) Bank transfer details:
Account name: 曾紫玲 (Cathay United Bank, Hsinchu Science Park Branch, code 013)
Account number: 075506235774 (Cathay United Bank, Hsinchu Science Park Branch, code 013)
After the transfer, please provide your name, the time of the transfer, and the last five digits of the transferring account so the payment can be verified
※ Receipts for the course fee will be issued on the day of the lecture
Contact: 曾紫玲, Tel: 03-5712121 ext. 54599, Email: tzuling@nctu.edu.tw
Abstracts:
1. Deep Learning based Wireless Resource Allocation
【Abstract】
Judicious resource allocation is critical to mitigating interference, improving network efficiency, and ultimately optimizing wireless network performance. The traditional wisdom is to explicitly formulate resource allocation as an optimization problem and then exploit mathematical programming to solve it to a certain level of optimality. However, as wireless networks become increasingly diverse and complex, such as high-mobility vehicular networks, the current design methodologies face significant challenges and thus call for rethinking of the traditional design philosophy. Meanwhile, deep learning represents a promising alternative due to its remarkable power to leverage data for problem solving. In this talk, I will present our research progress in deep learning based wireless resource allocation. Deep learning can help solve optimization problems for resource allocation or can be directly used for resource allocation. We will first present our research results in using deep learning to solve linear sum assignment problems (LSAP) and reduce the complexity of mixed integer non-linear programming (MINLP), and introduce graph embedding for wireless link scheduling. We will then discuss how to use deep reinforcement learning directly for wireless resource allocation with application in vehicular networks.
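As a concrete illustration of the "directly used for resource allocation" idea, here is a minimal sketch (our own illustrative example, not code from the talk) of a small neural network trained to map the channel gains of K interfering links straight to transmit powers that maximize the expected sum rate; the channel model, network size, and hyperparameters are assumptions.

```python
# Minimal illustrative sketch (not code from the talk): a neural network that maps
# the channel gains of K interfering links directly to transmit powers, trained to
# maximize the expected sum rate ("learning to optimize" style). Network size,
# channel model, and hyperparameters are assumptions made for illustration.
import torch
import torch.nn as nn

K = 5                     # number of transmitter-receiver pairs (assumed)
noise_power = 1.0         # normalized noise power (assumed)

policy = nn.Sequential(   # channel matrix (K*K gains) -> per-link power in [0, 1]
    nn.Linear(K * K, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, K), nn.Sigmoid(),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def sum_rate(H, p):
    """Sum rate of K links; H[i, j] is the gain from transmitter j to receiver i."""
    signal = torch.diagonal(H, dim1=-2, dim2=-1) * p        # desired received power
    interference = (H * p.unsqueeze(-2)).sum(-1) - signal   # cross-link power
    sinr = signal / (interference + noise_power)
    return torch.log2(1.0 + sinr).sum(-1)

for step in range(2000):
    H = torch.rand(128, K, K)         # batch of random channel realizations
    p = policy(H.flatten(1))          # powers predicted by the network
    loss = -sum_rate(H, p).mean()     # maximize sum rate <=> minimize its negative
    opt.zero_grad(); loss.backward(); opt.step()
```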
2. Deep Learning in Physical Layer Communications
【Abstract】
It has been demonstrated recently that deep learning (DL) has great potential to break the bottlenecks of conventional communication systems. In this talk, we present our recent work on DL in physical layer communications. DL can improve the performance of each individual (traditional) block in a conventional communication system or jointly optimize the whole transmitter or receiver. Therefore, we can categorize the applications of DL in physical layer communications into those with and those without block processing structures. For DL-based communication systems with block structures, we present joint channel estimation and signal detection based on a fully connected deep neural network, model-driven DL for signal detection, and some experimental results. For those without block structures, we present our recent endeavors in developing end-to-end learning communication systems with the help of deep reinforcement learning (DRL) and generative adversarial networks (GANs). At the end of the talk, we discuss some potential research topics in the area.
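To make the block-structured case more concrete, the sketch below (an illustrative example, not the speakers' code) trains a fully connected network to recover transmitted bits directly from received pilot and data symbols, in the spirit of DL-based joint channel estimation and signal detection; the pilot scheme, block-fading channel model, and network size are assumptions.

```python
# Minimal illustrative sketch (not the speakers' code): a fully connected network
# that detects transmitted bits from received pilot + data symbols without an
# explicit channel-estimation block. Channel model and sizes are assumptions.
import torch
import torch.nn as nn

N_DATA = 8                           # QPSK data symbols per block (assumed)
pilot = torch.tensor([1.0, 0.0])     # one known unit-energy pilot symbol (real, imag)

net = nn.Sequential(                 # input: (1 pilot + N_DATA symbols) * 2 (real/imag)
    nn.Linear(2 * (1 + N_DATA), 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 2 * N_DATA),      # logits for the 2*N_DATA transmitted bits
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def simulate(batch):
    """Random bits -> QPSK -> block-fading channel with AWGN -> received symbols."""
    bits = torch.randint(0, 2, (batch, 2 * N_DATA)).float()
    sym = (2 * bits - 1).view(batch, N_DATA, 2) / 2 ** 0.5   # QPSK, unit power
    tx = torch.cat([pilot.expand(batch, 1, 2), sym], dim=1)  # prepend the pilot
    h = torch.randn(batch, 1, 2) / 2 ** 0.5                  # flat Rayleigh fading
    rx_re = h[..., 0] * tx[..., 0] - h[..., 1] * tx[..., 1]  # complex multiply h * tx
    rx_im = h[..., 0] * tx[..., 1] + h[..., 1] * tx[..., 0]
    rx = torch.stack([rx_re, rx_im], dim=-1) + 0.05 * torch.randn(batch, 1 + N_DATA, 2)
    return rx.flatten(1), bits

for step in range(3000):
    rx, bits = simulate(256)
    loss = bce(net(rx), bits)        # learn to detect bits without explicit CSI
    opt.zero_grad(); loss.backward(); opt.step()
```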
3. Machine Learning Interference Management
【Abstract】
In this talk, we discuss how machine learning algorithms can address the performance issues of high-capacity ultra-dense small cells in an environment with dynamic traffic patterns and time-varying channel conditions. We introduce a bi-adaptive self-organizing network (Bi-SON) to exploit the power of data-driven resource management in ultra-dense small cells (UDSC). On top of the Bi-SON framework, we further develop an affinity propagation unsupervised learning algorithm to improve energy efficiency and reduce interference for the operator-deployed and the plug-and-play small cells, respectively. Finally, we discuss the opportunities and challenges of reinforcement learning and deep reinforcement learning (DRL) in more decentralized, ad hoc, and autonomous modern networks, such as Internet of Things (IoT), vehicle-to-vehicle, and unmanned aerial vehicle (UAV) networks.
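For readers unfamiliar with affinity propagation, the sketch below (illustrative only, not the Bi-SON implementation) shows how the algorithm can group densely deployed small cells whose mutual interference is strong, so that each cluster can coordinate its resource use; the deployment model, path-loss exponent, and similarity metric are assumptions.

```python
# Minimal illustrative sketch (not the Bi-SON implementation): clustering densely
# deployed small cells with affinity propagation so that strongly interfering cells
# fall into the same cluster and can coordinate their resources.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
cells = rng.uniform(0, 100, size=(40, 2))     # 40 small cells in a 100 m x 100 m area

# Similarity: negative path loss between cell pairs -- nearby cells (strong mutual
# interference) are "similar" and should be grouped for coordinated resource use.
dist = np.linalg.norm(cells[:, None, :] - cells[None, :, :], axis=-1) + 1.0
similarity = -10 * 3.5 * np.log10(dist)       # path-loss exponent 3.5 (assumed)

ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(similarity)
print("cluster per cell:", ap.labels_)                   # cluster index for each cell
print("exemplar cells:", ap.cluster_centers_indices_)    # one 'leader' cell per cluster
```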
Bios:
Dr. Geoffrey Li is a Professor with the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He was with AT&T Labs – Research for five years before joining Georgia Tech in 2000. His general research interests include statistical signal processing and machine learning for wireless communications. In these areas, he has published around 500 refereed journal and conference papers in addition to over 40 granted patents. His publications have been cited around 37,000 times, and he has been listed as a World's Most Influential Scientific Mind, also known as a Highly Cited Researcher, by Thomson Reuters almost every year since 2001. He has been an IEEE Fellow since 2006. He received the 2010 IEEE ComSoc Stephen O. Rice Prize Paper Award, the 2013 IEEE VTS James Evans Avant Garde Award, the 2014 IEEE VTS Jack Neubauer Memorial Award, the 2017 IEEE ComSoc Award for Advances in Communication, and the 2017 IEEE SPS Donald G. Fink Overview Paper Award. He also won the 2015 Distinguished Faculty Achievement Award from the School of Electrical and Computer Engineering, Georgia Tech.
Li-Chun Wang (M'96 – SM'06 – F'11) received his Ph.D. degree from the Georgia Institute of Technology, Atlanta, in 1996. From 1996 to 2000, he was with AT&T Laboratories, where he was a Senior Technical Staff Member in the Wireless Communications Research Department. Currently, he is Chair Professor of the Department of Electrical and Computer Engineering and Director of the Big Data Research Center at National Chiao Tung University in Taiwan. Dr. Wang was elected an IEEE Fellow in 2011 for his contributions to cellular architectures and radio resource management in wireless networks. He was a co-recipient of the IEEE Communications Society Asia-Pacific Board Best Award (2015), the Y. Z. Hsu Scientific Paper Award (2013), and the IEEE Jack Neubauer Best Paper Award (1997). He won the Distinguished Research Award of the Ministry of Science and Technology in Taiwan twice (2012 and 2016). He is currently an associate editor of the IEEE Transactions on Cognitive Communications and Networking. His current research interests are in the areas of software-defined mobile networks, heterogeneous networks, and data-driven intelligent wireless communications. He holds 23 US patents, has published over 300 journal and conference papers, and co-edited the book "Key Technologies for 5G Wireless Systems" (Cambridge University Press, 2017).
【Talk 2016.3.29】Title: Brain-inspired Computing: From Hardware to Applications
.
All are welcome to attend.
.
Time: 13:10, Mar. 29, 2016
Venue: ED220 (Engineering Building 4, National Chiao Tung University)
.
Speaker: Dr. Gi-Joon Nam, IBM T. J. Watson Research Center
.
.
Abstract:
In recent years, there has been a significant amount of interest in the quest to build a brain-inspired machine. In contrast to the prevailing von Neumann architecture on which most traditional computers are based, a brain-inspired machine mimics the operations of the brain's neurons and synapses to replicate the human brain's talent for learning new tasks. In this talk, we will provide a system perspective on a brain-inspired computer by presenting circuit implementations, software programming models, and promising applications.
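As a rough illustration of the spike-based, event-driven style of computation that such machines mimic (not IBM's design), the following sketches a single leaky integrate-and-fire neuron; all constants are illustrative.

```python
# Tiny illustrative sketch (not IBM's design): a leaky integrate-and-fire neuron,
# the kind of spiking unit that brain-inspired hardware emulates instead of
# executing fetch-decode-execute cycles. All constants are illustrative.
import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.95, dt=1.0):
    """Integrate input current with leak; emit a spike (1) when the membrane
    potential crosses the threshold, then reset the potential."""
    v, spikes = 0.0, []
    for i in input_current:
        v = leak * v + dt * i      # leaky integration of the incoming current
        if v >= threshold:         # fire and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(1)
print(lif_neuron(rng.uniform(0, 0.4, size=50)))   # sparse spike train for weak input
```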
.
.
Bio:
Gi-Joon Nam is a research staff member and manager at the IBM T. J. Watson Research Center, where he currently manages the Physical Design department. His group conducts research on design automation techniques for high-performance IBM computing products such as IBM's P/Z microprocessors and server chips. Prior to this, he managed the Optimized Analytics System department at the IBM Austin Research Lab, working on workload-optimized systems for big data applications. Gi-Joon has been involved with leading-edge high-performance VLSI designs for more than 15 years, from the 130 nm technology node down to sub-20 nm technologies.
IBM launches a blockchain cloud service and contributes 44,000 lines of blockchain code
The new Blockchain-as-a-Service is built on IBM's Bluemix cloud platform. It lets developers create and manage blockchain networks to run distributed ledger applications, and use the DevOps tooling on Bluemix to develop, deploy, run, and manage blockchain applications.
By 林妍溱 | Published 2016-02-17
IBM has set up IBM Garages in London, New York, Singapore, and Tokyo to help enterprises design and develop blockchain-related applications.
IBM announced a series of blockchain-related services and initiatives, including a Blockchain-as-a-Service offering for developers and a contribution of nearly 44,000 lines of code to the Linux Foundation's Hyperledger project.
The new Blockchain-as-a-Service, built on IBM's Bluemix cloud platform, lets developers create and manage blockchain networks to run distributed ledger applications. They can create digital assets and the corresponding business logic and send them to members of a permissioned blockchain test network.
The DevOps tools available on Bluemix can be used to develop, deploy, run, and manage blockchain applications on IBM's cloud platform. These applications can also be deployed on IBM z Systems, and they can access existing transactions on distributed servers and z Systems through APIs to complete payment, settlement, supply chain, and other business processes.
IBM also noted that IoT devices such as RFID tags and barcode scanners will be able to connect to the IBM blockchain ledger, so that the data these devices generate can be used to update or verify smart contracts and to ensure that all parties to a transaction share the same information and state. IBM has also opened IBM Garages design and development centers in London, New York, Singapore, and Tokyo, where IBM experts help enterprises design, develop, and run blockchain applications; its Global Business Services unit also offers blockchain consulting services for banking, financial services, and logistics clients.
In addition, the Linux Foundation announced the Hyperledger project last year. Ledgers are one application of blockchain technology and can be used to trade and track items of value. As a founding member, IBM also announced today that it will contribute nearly 44,000 lines of code to help developers build distributed ledgers.
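As a generic illustration of why a blockchain ledger is tamper-evident (a conceptual sketch, not IBM or Hyperledger code), the following builds an append-only chain in which each block's hash commits to the previous block, so altering any past transaction breaks every later link; field names are illustrative.

```python
# Minimal conceptual sketch (generic, not IBM or Hyperledger code): an append-only
# ledger where each block stores the hash of the previous block, so changing any
# past transaction invalidates every later hash. Field names are illustrative.
import hashlib
import json

def add_block(chain, transactions):
    """Append a block whose hash commits to its transactions and its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"index": len(chain), "prev_hash": prev_hash, "transactions": transactions}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body

ledger = []
add_block(ledger, [{"from": "alice", "to": "bob", "asset": "container-42"}])
add_block(ledger, [{"from": "bob", "to": "carol", "asset": "container-42"}])
# Verify the chain: every block must reference the hash of the block before it.
print(all(ledger[i]["prev_hash"] == ledger[i - 1]["hash"] for i in range(1, len(ledger))))
```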
IBM said it is already working with the London Stock Exchange Group and the Finnish company Kouvola Innovation.
Source: http://www.ithome.com.tw/news/103922