2015-FSE-Suggesting accurate method and class names(3)


neural probabilistic language modelling and follow it with two novel models that, specifically designed for method naming, refine the underlying neural model: our log-bilinear context model, which adds context and features, and subtoken context model, which adds subtokens and can be used to generate neologisms.

Language models (LM) are probability distributions over strings of a language. These models assume that we are trying to predict a token $t$ given a sequence of other tokens $c = (c_0, c_1, \ldots, c_N)$ that we call the context. LMs are very general; for example, if the goal is to sequentially predict every token in a file, as an $n$-gram model does, then we can take $t = y_m$ and $c = (y_{m-n+1}, y_{m-n+2}, \ldots, y_{m-1})$. Alternately, for the method naming problem, we can take $t$ to be the identifier token in the declaration that names the function, and $c$ to be a sequence that contains all identifiers in the function body. Obviously, we cannot store a probability value for every possible context, so we must make simplifying assumptions to make the modeling tractable. Different LMs make different simplifying assumptions.
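The two instantiations of $(t, c)$ described above can be sketched concretely. This is an illustrative example only; the token lists and the method name below are invented, not drawn from the paper's corpus.

```python
# Hypothetical sketch: two ways of instantiating (t, c) pairs,
# mirroring the n-gram and method-naming settings described above.

def ngram_examples(tokens, n):
    """Yield (target, context) pairs: the context is the n-1 preceding tokens."""
    for m in range(n - 1, len(tokens)):
        yield tokens[m], tuple(tokens[m - n + 1:m])

def method_naming_example(method_name, body_identifiers):
    """The target is the method's name; the context is all identifiers in its body."""
    return method_name, tuple(body_identifiers)

tokens = ["int", "x", "=", "y", "+", "z", ";"]
pairs = list(ngram_examples(tokens, 3))
# first pair: ("=", ("int", "x"))

t, c = method_naming_example("getTotal", ["sum", "items", "price", "sum"])
```

Note that the method-naming context keeps repeated identifiers, which matters later when context embeddings are summed.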

2.1Background

To build intuition, we begin by reviewing the $n$-gram LM, which is a standard technique in NLP and speech processing, and which has become increasingly popular in software engineering [25, 41, 2]. The $n$-gram model assumes that all of the information required to predict the next token is contained within the previous $n-1$ tokens, i.e., $P(y_1 \ldots y_M) = \prod_{m=1}^{M} P(y_m \mid y_{m-1} \ldots y_{m-n+1})$. To specify this model we need (in principle) a table of $V^n$ numbers, where $V$ is the number of possible lexemes, that specifies the conditional probabilities for each possible $n$-gram. These are the parameters of the model that we learn from data.

There is a large literature on methods for training these models [16], which basically revolve around counting the proportion of times that token $y_m$ follows $y_{m-1} \ldots y_{m-n+1}$. However, even when $n = 4$ or $n = 5$, we cannot expect to estimate the counts of all $n$-grams reliably, as the number of possible $n$-grams is exponential in $n$. Therefore, smoothing methods are employed, which generally modify the count of a rare $n$-gram $y_1 \ldots y_n$ to make it more similar to the count of a shorter suffix $y_2 \ldots y_n$, whose frequency we can estimate more reliably. This procedure involves the implicit assumption that two contexts are most similar if they share a long suffix. But this assumption does not always hold. Many similar contexts, such as x+y versus x+z, might be treated very differently by an $n$-gram model, because the final token is different.
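The count-based estimation step can be sketched as follows. This is a minimal unsmoothed sketch on an invented toy corpus; real toolkits add smoothing (e.g. backing off to shorter suffixes), as the text notes.

```python
from collections import Counter

def train_ngram(tokens, n):
    """Return an unsmoothed maximum-likelihood n-gram probability estimator."""
    ngrams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    prefixes = Counter(tuple(tokens[i:i + n - 1]) for i in range(len(tokens) - n + 1))

    def prob(target, context):
        context = tuple(context)
        if prefixes[context] == 0:
            return 0.0  # unseen context; a smoothing method would back off here
        return ngrams[context + (target,)] / prefixes[context]

    return prob

corpus = ["x", "+", "y", ";", "x", "+", "z", ";"]
prob = train_ngram(corpus, 2)
# prob("+", ["x"]) == 1.0: "+" always follows "x" in this toy corpus
```

Even in this bigram toy, the suffix-similarity assumption is visible: the contexts x+y and x+z share no counts for their final tokens, so the model treats them as unrelated.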

Log-bilinear models. Neural LMs [10] address the challenge that the simple $n$-gram model has by making similar predictions for similar contexts. They predict the next token $y_m$ using a neural network that takes the previous tokens as input. This allows the network to flexibly learn which tokens, like int, provide much information about the immediately following token, and which tokens, like the semicolon ';', provide very little. Unlike an $n$-gram model, a neural LM makes it easy to add general long-distance features of the context into the prediction: we simply add them as additional inputs to the neural net. In our work, we focus on a simple type of neural LM that has been effective in practice, namely, the log-bilinear LM [37] (LBL). We start with a general treatment of log-linear models, considering models of the form

$$P(t \mid c) = \frac{\exp(s_\theta(t, c))}{\sum_{t'} \exp(s_\theta(t', c))}. \qquad (1)$$

Intuitively, $s_\theta$ is a function that indicates how much the model likes to see both $t$ and $c$ together, the $\exp$ function maps this to be always positive, and the denominator ensures that the result is a probability distribution. This choice is very general. For example, if $s_\theta$ is a linear function of the features in $c$, then the discriminative model is simply a logistic regression.
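The normalization in Eq. (1) can be sketched directly. The score table below is invented purely for illustration; any function $s_\theta$ would do.

```python
import math

def log_linear_prob(score, target, context, vocab):
    """Eq. (1): exponentiate scores and normalize over all candidate targets."""
    z = sum(math.exp(score(tp, context)) for tp in vocab)
    return math.exp(score(target, context)) / z

vocab = ["foo", "bar", "baz"]
scores = {"foo": 2.0, "bar": 1.0, "baz": 0.0}
score = lambda t, c: scores[t]  # toy score function; ignores the context
p = log_linear_prob(score, "foo", None, vocab)
# probabilities over vocab sum to 1 by construction
```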

Log-bilinear models learn a map from every possible target $t$ to a vector $q_t \in \mathbb{R}^D$, and from each context $c$ to a vector $\hat{r}_c \in \mathbb{R}^D$. We interpret these as locations of each context and each target lexeme in a $D$-dimensional space; these locations are called embeddings. The model predicts that the token $t$ is more likely to appear in context $c$ if the embedding $q_t$ of the token is similar to the embedding $\hat{r}_c$ of the context. To encode this in the model, we choose

$$s_\theta(t, c) = \hat{r}_c^{\top} q_t + b_t, \qquad (2)$$

where $b_t$ is a scalar bias which represents how commonly $t$ occurs regardless of the context. To understand this equation intuitively, note that, if the vectors $\hat{r}_c$ and $q_t$ have norm 1, then their dot product is simply the cosine of the angle between them. So $s_\theta$, and hence $p(t \mid c)$, is larger if either vector has a large norm, if $b_t$ is large, or if $\hat{r}_c$ and $q_t$ have a small angle between them, that is, they are more similar according to the commonly used cosine similarity metric. To complete this description, we define the maps $t \to q_t$ and $c \to \hat{r}_c$. For the targets $t$, the most common choice is to simply include the vector $q_t$ for every $t$ as a parameter of the model. That is, the training procedure has the freedom to learn an arbitrary map between $t$ and $q_t$. For the contexts $c$, this choice is not possible, as there are too many possible contexts. Instead, a common choice [31, 39] is to represent the embedding $\hat{r}_c$ of a context as the sum of embeddings of the tokens within it, that is,

$$\hat{r}_c = \sum_{t=1}^{|c|} C_t\, r_{c_t}, \qquad (3)$$

where $r_{c_t} \in \mathbb{R}^D$ is a vector for each lexeme that is included in the model parameters. The variable $t$ indexes every token in the context $c$, so if the same lexeme occurs multiple times in $c$, then it appears multiple times in the sum. The matrix $C_t$ is a diagonal matrix that serves as a scaling factor depending on the position of a lexeme within the context. This allows, for example, a lexeme's influence on $c$'s position to depend on how close it is to the target. The $D$ non-zero values in $C_t$ for each $t$ are also included in the model parameters. Each lexeme $v$ has two embeddings: an embedding $q_v$ for when it is used as a target and an embedding $r_v$ for when it appears in the context.
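Equations (2) and (3) can be sketched with toy parameters. The embeddings, biases, and vocabulary below are randomly initialized stand-ins, not trained values; since $C_t$ is diagonal, multiplying by it is an elementwise scaling.

```python
import numpy as np

# Toy parameters (assumed for illustration): D-dimensional embeddings for a
# tiny vocabulary, a bias per target, and one diagonal C_t per context position.
D = 4
rng = np.random.default_rng(0)
r = {v: rng.normal(size=D) for v in ["x", "+", "y"]}   # context embeddings r_v
q = {v: rng.normal(size=D) for v in ["add", "sub"]}    # target embeddings q_v
b = {"add": 0.1, "sub": -0.2}                          # target biases b_t
C = rng.normal(size=(3, D))                            # diagonals of C_t, one row per position

def context_embedding(context):
    # Eq. (3): r_hat_c = sum_t C_t r_{c_t}; diagonal C_t acts elementwise
    return sum(C[t] * r[tok] for t, tok in enumerate(context))

def score(target, context):
    # Eq. (2): s_theta(t, c) = r_hat_c . q_t + b_t
    return context_embedding(context) @ q[target] + b[target]

s = score("add", ["x", "+", "y"])
```

A repeated lexeme contributes once per occurrence, each time scaled by its own position's $C_t$, matching the text's remark about multiple occurrences in the sum.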

To summarize, log-bilinear models make the assumption that every token and every context can be mapped in a $D$-dimensional space. There are two kinds of embedding vectors: those directly learned (i.e. the parameters of the model) and those computed from the parameters of the model. To indicate this distinction, we place a hat on $\hat{r}_c$ to indicate that it is computed from the model parameters, whereas we write $q_t$ without a hat to indicate that it is a parameter vector that is learned directly by the training procedure. These models can also be viewed as a three-layer neural network, in which the input layer encodes all of the lexemes in $c$ using a 1-of-$V$ encoding, the hidden layer outputs the vectors $r_{c_t}$ for each token in the context, and the output layer computes the score functions $s_\theta(t, c)$ and passes them to a softmax nonlinearity. For details on the neural network representation, see Bengio et al. [10].

To learn these parameters, it has recently been shown [39, 38] that an alternative to the maximum likelihood method called noise contrastive estimation (NCE) [21] is effective. NCE measures how well the model $p(t \mid c)$ can distinguish the real data in the training set from "fantasy data" that is generated from a simple noise distribution. At a high level, this can be viewed as a black-box alternative to maximum likelihood that measures how well the model fits the training data. We optimize the model parameters using stochastic gradient descent. We employ NCE for all models in this paper.
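The discrimination objective behind NCE can be sketched in its standard binary-classification form: for each observed pair, the model is trained to tell the true target apart from $k$ samples drawn from a noise distribution. The score function and noise probabilities below are invented stand-ins, and this sketch shows only the loss for a single data point, not the paper's full training setup.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def nce_loss(score, p_noise, target, context, noise_samples):
    """Standard NCE loss for one observed (target, context) pair and k noise samples."""
    k = len(noise_samples)

    def delta(t):
        # log-odds that t came from the model rather than the noise distribution
        return score(t, context) - math.log(k * p_noise(t))

    loss = -math.log(sigmoid(delta(target)))          # true pair classified as "data"
    for t in noise_samples:
        loss -= math.log(1.0 - sigmoid(delta(t)))     # noise samples classified as "noise"
    return loss

score = lambda t, c: {"foo": 2.0, "bar": 0.0, "baz": -1.0}[t]  # toy scores
p_noise = lambda t: 1.0 / 3.0                                   # uniform toy noise
loss = nce_loss(score, p_noise, "foo", None, ["bar", "baz"])
```

The appeal, as the text notes, is that this objective never computes the softmax denominator of Eq. (1), only scores for the true target and the $k$ noise samples.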

2.2 Log-bilinear Context Models of Code

Now we present a new neural network, a novel LBL LM for code, which we call a log-bilinear context model. The key idea
