neural probabilistic language modelling and follow it with two novel models that, specifically designed for method naming, refine the underlying neural model: our log-bilinear context model, which adds context and features, and the subtoken context model, which adds subtokens and can be used to generate neologisms.
Language models (LMs) are probability distributions over strings of a language. These models assume that we are trying to predict a token t given a sequence of other tokens c = (c_0, c_1, ..., c_N) that we call the context. LMs are very general; for example, if the goal is to sequentially predict every token in a file, as an n-gram model does, then we can take t = y_m and c = (y_{m-n+1}, y_{m-n+2}, ..., y_{m-1}). Alternately, for the method naming problem, we can take t to be the identifier token in the declaration that names the function, and c to be a sequence that contains all identifiers in the function body. Obviously, we cannot store a probability value for every possible context, so we must make simplifying assumptions to make the modeling tractable. Different LMs make different simplifying assumptions.
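To make the two instantiations concrete, here is a minimal Python sketch (our own illustration, not code from the paper; the function names and example inputs are hypothetical) of how a (t, c) pair could be extracted in each setting:

    # Minimal sketch of the two (target, context) choices described above.

    def ngram_instance(tokens, m, n):
        """Target is token y_m; context is the previous n-1 tokens."""
        return tokens[m], tokens[max(0, m - n + 1):m]

    def method_naming_instance(method_name, body_identifiers):
        """Target is the declared method name; context is the
        identifiers appearing in the method body."""
        return method_name, list(body_identifiers)

    # Example: predicting the name of a method from its body.
    t, c = method_naming_instance("getSize", ["elements", "size", "count"])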
2.1Background
To build intuition, we begin by reviewing the n-gram LM, which is a standard technique in NLP and speech processing, and which has become increasingly popular in software engineering [25, 41, 2]. The n-gram model assumes that all of the information required to predict the next token is contained within the previous n-1 tokens, i.e.,

    P(y_1 \dots y_M) = \prod_{m=1}^{M} P(y_m \mid y_{m-1} \dots y_{m-n+1}).

To specify this model we need (in principle) a table of V^n numbers, where V is the number of possible lexemes, that specifies the conditional probabilities for each possible n-gram. These are the parameters of the model that we learn from data.
There is a large literature on methods for training these models [16], which basically revolve around counting the proportion of times that token y_m follows y_{m-1} ... y_{m-n+1}. However, even when n = 4 or n = 5, we cannot expect to estimate the counts of all n-grams reliably, as the number of possible n-grams is exponential in n. Therefore, smoothing methods are employed, which generally modify the count of a rare n-gram y_1 ... y_n to make it more similar to the count of a shorter suffix y_2 ... y_n, whose frequency we can estimate more reliably. This procedure involves the implicit assumption that two contexts are most similar if they share a long suffix. But this assumption does not always hold. Many similar contexts, such as x + y versus x + z, might be treated very differently by an n-gram model, because the final token is different.
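As a concrete sketch of count-based estimation, the following Python snippet trains a trigram model with simple add-alpha smoothing; this is a deliberately simpler scheme than the suffix-based smoothing methods of [16], and all names here are our own:

    from collections import Counter

    def train_trigram(tokens):
        """Count all trigrams and bigrams in a token sequence."""
        tri = Counter(zip(tokens, tokens[1:], tokens[2:]))
        bi = Counter(zip(tokens, tokens[1:]))
        return tri, bi, set(tokens)

    def prob(tri, bi, vocab, y, context, alpha=1.0):
        """P(y | context) with add-alpha smoothing,
        where context = (y_{m-2}, y_{m-1})."""
        c2, c1 = context
        num = tri[(c2, c1, y)] + alpha
        den = bi[(c2, c1)] + alpha * len(vocab)
        return num / den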
Log-bilinear models. Neural LMs [10] address the challenge that the simple n-gram model has by making similar predictions for similar contexts. They predict the next token y_m using a neural network that takes the previous tokens as input. This allows the network to flexibly learn which tokens, like int, provide much information about the immediately following token, and which tokens, like the semicolon ';', provide very little. Unlike an n-gram model, a neural LM makes it easy to add general long-distance features of the context into the prediction: we simply add them as additional inputs to the neural net. In our work, we focus on a simple type of neural LM that has been effective in practice, namely, the log-bilinear LM (LBL) [37]. We start with a general treatment of log-linear models, considering models of the form
    P(t \mid c) = \frac{\exp(s_\theta(t, c))}{\sum_{t'} \exp(s_\theta(t', c))}.    (1)

Intuitively, s_θ is a function that indicates how much the model likes to see both t and c together, the exp function maps this to be always positive, and the denominator ensures that the result is a probability distribution. This choice is very general. For example, if s_θ is a linear function of the features in c, then the discriminative model is simply a logistic regression.
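Equation (1) is simply a softmax over scores; a minimal numerical sketch (our own illustration, with an arbitrary score vector) is:

    import numpy as np

    def log_linear_prob(scores):
        """Normalize arbitrary scores s_theta(t, c) into P(t | c), Eq. (1).
        `scores` holds one entry per candidate target t."""
        z = np.exp(scores - scores.max())  # subtract max for numerical stability
        return z / z.sum()

    # Example: three candidate targets with scores 2.0, 0.5, and -1.0.
    p = log_linear_prob(np.array([2.0, 0.5, -1.0]))  # sums to 1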
Log-bilinear models learn a map from every possible target t to a vector q_t ∈ R^D, and from each context c to a vector r̂_c ∈ R^D. We interpret these as locations of each context and each target lexeme in a D-dimensional space; these locations are called embeddings. The model predicts that the token t is more likely to appear in context c if the embedding q_t of the token is similar to that r̂_c of the context. To encode this in the model, we choose

    s_\theta(t, c) = \hat{r}_c^{\top} q_t + b_t,    (2)

where b_t is a scalar bias which represents how commonly t occurs regardless of the context. To understand this equation intuitively, note that, if the vectors r̂_c and q_t have norm 1, then their dot product is simply the cosine of the angle between them. So s_θ, and hence p(t | c), is larger if either vector has a large norm, if b_t is large, or if r̂_c and q_t have a small angle between them, that is, if they are more similar according to the commonly used cosine similarity metric. To complete this description, we define the maps t → q_t and c → r̂_c.
For the targets t, the most common choice is to simply include the vector q_t for every t as a parameter of the model. That is, the training procedure has the freedom to learn an arbitrary map between t and q_t. For the contexts c, this choice is not possible, as there are too many possible contexts. Instead, a common choice [31, 39] is to represent the embedding r̂_c of a context as the sum of the embeddings of the tokens within it, that is,

    \hat{r}_c = \sum_{t=1}^{|C|} C_t r_{c_t},    (3)

where r_{c_t} ∈ R^D is a vector for each lexeme that is included in the model parameters. The variable t indexes every token in the context c, so if the same lexeme occurs multiple times in c, then it appears multiple times in the sum. The matrix C_t is a diagonal matrix that serves as a scaling factor depending on the position of a lexeme within the context. This allows, for example, a lexeme's influence on r̂_c to depend on its position, i.e., on how close it is to the target. The D non-zero values in C_t for each t are also included in the model parameters. Each lexeme v has two embeddings: an embedding q_v for when it is used as a target and an embedding r_v for when it appears in the context.
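Putting Equations (2) and (3) together, the forward pass of the log-bilinear model can be sketched as follows (our own illustration, not the authors' code; we store each diagonal matrix C_t as the D-vector of its diagonal entries, and the parameter names R, Q, C, b are assumptions):

    import numpy as np

    def context_embedding(R, C, context_ids):
        """Eq. (3): r_hat_c = sum_t C_t r_{c_t}.
        R: (V, D) context embeddings r_v;
        C: (num_positions, D) diagonals of the scaling matrices C_t;
        context_ids: lexeme indices of the tokens in the context."""
        return sum(C[i] * R[tok] for i, tok in enumerate(context_ids))

    def lbl_scores(Q, b, r_hat):
        """Eq. (2): s_theta(t, c) = r_hat_c . q_t + b_t, for all targets t.
        Q: (V, D) target embeddings q_t; b: (V,) biases b_t."""
        return Q @ r_hat + b

Passing the resulting score vector through the softmax of Equation (1) yields P(t | c).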
To summarize, log-bilinear models make the assumption that every token and every context can be mapped into a D-dimensional space. There are two kinds of embedding vectors: those directly learned (i.e., the parameters of the model) and those computed from the parameters of the model. To indicate this distinction, we place a hat on r̂_c to indicate that it is computed from the model parameters, whereas we write q_t without a hat to indicate that it is a parameter vector that is learned directly by the training procedure. These models can also be viewed as a three-layer neural network, in which the input layer encodes all of the lexemes in c using a 1-of-V encoding, the hidden layer outputs the vectors r_{c_t} for each token in the context, and the output layer computes the score functions s_θ(t, c) and passes them to a softmax nonlinearity. For details on the neural network representation, see Bengio et al. [10].
To learn these parameters, it has recently been shown [39, 38] that an alternative to the maximum likelihood method called noise contrastive estimation (NCE) [21] is effective. NCE measures how well the model p(t | c) can distinguish the real data in the training set from "fantasy data" that is generated from a simple noise distribution. At a high level, this can be viewed as a black-box alternative to maximum likelihood that measures how well the model fits the training data. We optimize the model parameters using stochastic gradient descent. We employ NCE for all models in this paper.
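As a rough sketch, the per-example NCE objective contrasts the model's score for the observed target with the scores of k sampled noise targets. The following illustration treats the unnormalized score s_θ(t, c) as a log-probability, as is common in NCE-trained LMs; the function and parameter names are our own assumptions:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def nce_loss(s_data, s_noise, log_pn_data, log_pn_noise, k):
        """Negative NCE objective for one (t, c) training pair.
        s_data: model score s_theta(t, c) of the observed target;
        s_noise: (k,) scores of k sampled 'fantasy' targets;
        log_pn_*: log-probabilities under the noise distribution."""
        # Logit that each candidate came from the data rather than the noise.
        d_data = s_data - (np.log(k) + log_pn_data)
        d_noise = s_noise - (np.log(k) + log_pn_noise)
        return -(np.log(sigmoid(d_data)) + np.log(sigmoid(-d_noise)).sum())

Minimizing this loss (e.g., by stochastic gradient descent, as above) pushes the model to score real (t, c) pairs above sampled noise targets.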
2.2 Log-bilinear Context Models of Code
Now we present a new neural network, a novel LBL LM for code, which we call a log-bilinear context model. The key idea