

      Differentially Private Distributed Machine Learning

      Source: Guangzhou Research Institute

      Lecture title: Differentially Private Distributed Machine Learning

      Speaker: Associate Professor Miao Pan

      Time: December 16, 9:00

      Venue: Tencent Meeting live stream (Meeting ID: 348908387)

      Speaker bio:

      Miao Pan is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Houston and a recipient of the 2014 NSF CAREER Award. He received his Ph.D. in Electrical and Computer Engineering from the University of Florida in August 2012. His research interests include cybersecurity, deep learning privacy, big data privacy, underwater wireless communications and networking, and cognitive radio networks. He has published more than 200 papers in renowned journals and conference proceedings, including IEEE/ACM Transactions on Networking, IEEE Journal on Selected Areas in Communications, IEEE Transactions on Mobile Computing, and IEEE INFOCOM.

      Abstract:

      Machine learning has shown great potential in a variety of fields, such as retail, advertising, manufacturing, healthcare, and insurance. As it spreads, however, data is being generated at an ever-increasing rate, making centralized collection and processing computationally prohibitive. Distributed machine learning has therefore attracted considerable interest for its ability to exploit the collective computing power of edge devices. Yet during the learning process, model updates computed from local private samples and large-scale parameter exchanges among agents raise serious privacy concerns and impose heavy communication burdens. To address these challenges, we will present three recent works that integrate differential privacy (DP) with the Alternating Direction Method of Multipliers (ADMM) and decentralized gradient descent, two promising optimization methods for distributed machine learning. First, we propose a differentially private robust ADMM algorithm that perturbs the exchanged variables at each iteration with Gaussian noise of decaying variance, where two noise-variance decay schemes are proposed to mitigate the effect of the added noise and preserve convergence. Second, to avoid having to compute the exact optimal solution in each ADMM iteration to ensure DP, we output a noisy approximate solution to the perturbed objective and further adopt the sparse vector technique to decide whether an agent should send its current perturbed solution to its neighbors, avoiding redundant privacy-loss accumulation and reducing communication cost.
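The sparse vector technique mentioned in the second work is a standard DP primitive (AboveThreshold). The sketch below is only an illustration of that primitive applied to a communication decision; the drift values, threshold, and privacy budget are hypothetical, not parameters from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def above_threshold(queries, threshold, eps, rng=rng):
    """AboveThreshold (sparse vector technique, SVT): return the index of the
    first query whose noisy value exceeds a noisy threshold. The privacy
    budget eps is consumed once for the single "above" answer, no matter how
    many queries are answered "below" -- which is what makes SVT attractive
    for deciding *whether* to communicate at all."""
    noisy_threshold = threshold + rng.laplace(scale=2.0 / eps)
    for i, q in enumerate(queries):
        if q + rng.laplace(scale=4.0 / eps) >= noisy_threshold:
            return i
    return None  # no query cleared the noisy threshold

# Hypothetical use in the spirit of the talk: an agent monitors how far its
# local solution has drifted since its last broadcast, and only updates its
# neighbors once the privately tested drift crosses a threshold.
drifts = [0.1, 0.2, 3.0, 0.1]   # per-iteration drift (illustrative values)
round_to_broadcast = above_threshold(drifts, threshold=1.0, eps=8.0)
```

With the large gap above, the test usually fires at the third round; the Laplace scales (2/eps for the threshold, 4/eps per query) follow the standard AboveThreshold calibration for sensitivity-1 queries.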
Third, we develop a differentially private and communication-efficient decentralized gradient descent method that updates the local models by combining DP noise with a random quantization operator, enforcing DP and communication efficiency simultaneously.
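The third approach can be illustrated on a toy decentralized least-squares problem. Everything below (the unbiased stochastic quantizer, the geometrically decaying Gaussian noise schedule, the mixing matrix, and all parameter values) is an illustrative assumption, not the algorithm from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_quantize(v, levels=64, rng=rng):
    """Unbiased stochastic quantizer: round each entry of v to a uniform grid
    of `levels` points spanning [-r, r] (r = max|v|), choosing between the two
    nearest grid points at random so that E[Q(v)] = v. Transmitting grid
    indices instead of full-precision floats is what saves communication."""
    r = np.max(np.abs(v)) + 1e-12
    scaled = v / r * (levels - 1) / 2    # map into [-(levels-1)/2, +(levels-1)/2]
    low = np.floor(scaled)
    p_up = scaled - low                  # probability of rounding up
    q = low + (rng.random(v.shape) < p_up)
    return q * r * 2.0 / (levels - 1)

def dp_decentralized_gd(X_parts, y_parts, W, T=200, lr=0.05,
                        sigma0=0.5, decay=0.97, rng=rng):
    """Toy DP decentralized gradient descent on least squares.
    Agent i holds (X_i, y_i); W is a doubly stochastic mixing matrix over the
    communication graph. Before sharing, each agent perturbs its model with
    Gaussian noise of geometrically decaying std (sigma0 * decay**t) and then
    quantizes it, so neighbors only ever see a noisy, compressed model."""
    n, d = len(X_parts), X_parts[0].shape[1]
    theta = np.zeros((n, d))
    for t in range(T):
        sigma = sigma0 * decay ** t
        shared = np.stack([random_quantize(theta[i] + rng.normal(0.0, sigma, d))
                           for i in range(n)])
        mixed = W @ shared               # gossip averaging with neighbors
        for i in range(n):
            grad = X_parts[i].T @ (X_parts[i] @ theta[i] - y_parts[i]) / len(y_parts[i])
            theta[i] = mixed[i] - lr * grad   # local gradient step
    return theta

# Three agents, each with 40 local samples of the same linear model.
d, m = 5, 40
theta_true = rng.normal(size=d)
X_parts = [rng.normal(size=(m, d)) for _ in range(3)]
y_parts = [X @ theta_true for X in X_parts]
W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])
theta = dp_decentralized_gd(X_parts, y_parts, W)
```

Because the noise standard deviation decays geometrically, early iterations get strong perturbation while late iterations can still converge, so the agents' models should end up close to `theta_true` and to each other.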


      Organizer: Guangzhou Research Institute


      South Campus address: No. 266 Xifeng Road (Xinglong Section), Xi'an, Shaanxi Province

      Postal code: 710126

      North Campus address: No. 2 South Taibai Road, Xi'an, Shaanxi Province

      Postal code: 710071

      Telephone: 029-88201000


      Copyright: Xidian University     陜ICP備05016463號     Developed and maintained by: Information Network Technology Center
