This thesis investigates the following issues in the eliank general report system: the EBNF (Extended Backus-Naur Form) definition of the system; lexical analysis and syntax analysis; the semantic analysis of assignment statements, audit statements, and balance rules; the evaluation of assignment statements and audit statements; and keeping reports balanced, with error equilibration, after figures are rounded off.
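The "balance after rounding" problem can be illustrated with a small sketch. The thesis does not specify its algorithm; the snippet below uses the standard largest-remainder method as one plausible approach, and the function name `round_balanced` is invented for illustration.

```python
import math

def round_balanced(values, places=0):
    """Round each value to `places` decimals while forcing the rounded
    figures to still sum to the rounded total (largest-remainder method).
    The residual rounding error is handed to the entries whose fractional
    parts are largest, which spreads (equilibrates) the error."""
    scale = 10 ** places
    scaled = [v * scale for v in values]
    floored = [math.floor(s) for s in scaled]
    target = round(sum(scaled))            # the total is rounded only once
    shortfall = target - sum(floored)      # units still to distribute
    # give one extra unit to the entries with the largest remainders
    order = sorted(range(len(values)),
                   key=lambda i: scaled[i] - floored[i], reverse=True)
    for i in order[:shortfall]:
        floored[i] += 1
    return [f / scale for f in floored]
```

Naive per-cell rounding of `[0.333, 0.333, 0.334]` to one decimal gives `[0.3, 0.3, 0.3]`, which no longer sums to 1.0; the balanced version yields `[0.3, 0.3, 0.4]`, and the unavoidable error lands on a single cell instead of the total.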
In the compiler component, the thesis describes the token structure of the SIPROM language with regular expressions. Based on finite-automaton theory, it proposes a lexical-analysis method that recognizes tokens through a word-form table, which makes the lexer easy to extend. After comparing several common syntax-analysis methods, it adopts an LR(1) parser.
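A table-driven lexer of this kind can be sketched in a few lines. The token names and patterns below are hypothetical (SIPROM's actual token set is not given in the abstract); the point is that extending the lexer only means adding a row to the table.

```python
import re

# Word-form table: each row pairs a token kind with its regular
# expression. Adding a new token kind is just adding a row here,
# which is the "openness" the table-driven design provides.
TOKEN_TABLE = [
    ("NUMBER", r"\d+(?:\.\d+)?"),
    ("IDENT",  r"[A-Za-z_][A-Za-z0-9_]*"),
    ("ASSIGN", r":="),
    ("OP",     r"[+\-*/]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_TABLE))

def tokenize(src):
    """Yield (kind, lexeme) pairs; whitespace (SKIP) is discarded."""
    for m in MASTER.finditer(src):
        if m.lastgroup != "SKIP":
            yield m.lastgroup, m.group()
```

For example, `tokenize("total := a + 12.5")` yields `IDENT`, `ASSIGN`, `IDENT`, `OP`, `NUMBER` tokens in order. Under the hood, `re` compiles the alternation into an automaton, which matches the finite-automaton foundation the thesis cites.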
The latter receives more attention: it automatically acquires context-specific lexical expansions from the web, making full use of the context of the term to be expanded and of higher-level natural-language-processing techniques such as syntax analysis. The method comprises two main stages, candidate-expansion extraction and expansion validation, both of which mine the web through a search engine. Refining the results step by step across these two stages yields high expansion precision, so that even from only a brief requirement description the initially constructed user profile is more accurate and richer.
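The two-stage pipeline can be sketched offline. The snippets list below stands in for search-engine results, the "such as" pattern is one hypothetical extraction rule, and co-occurrence with context words is one hypothetical validation criterion; the thesis's actual patterns and scoring are not given in the abstract.

```python
import re
from collections import Counter

def extract_candidates(term, snippets):
    """Stage 1: pull expansion candidates from snippets matching a
    lexical pattern like 'TERMs such as X, Y and Z.' (hypothetical rule)."""
    pat = re.compile(
        rf"{re.escape(term)}s? such as ([\w ,]+?(?: and \w+)?)[.;]", re.I)
    cands = Counter()
    for s in snippets:
        for m in pat.finditer(s):
            parts = re.split(r",| and ", m.group(1))
            cands.update(p.strip().lower() for p in parts if p.strip())
    return cands

def validate(cands, context_words, snippets):
    """Stage 2: keep a candidate only if it co-occurs with one of the
    user's context words somewhere in the snippet collection."""
    kept = []
    for c in cands:
        for s in snippets:
            low = s.lower()
            if c in low and any(w in low for w in context_words):
                kept.append(c)
                break
    return kept
```

With snippets mentioning "Parsers such as yacc, bison and menhir" plus a second snippet tying `yacc` and `bison` to the context word `grammar`, stage 1 extracts all three candidates and stage 2 keeps only the two that survive validation, which is how precision is raised step by step.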
In studying the language's compiler, i.e., its implementation, the thesis summarizes the general model of a compiler and chooses suitable schemes for the front end and the back end. For the front end, it does not use syntax-directed semantic analysis; instead, by introducing an equivalent representation of the source program, it arranges syntax analysis and semantic analysis into separate stages. For the back end, it adopts a scheme in which a virtual machine executes the intermediate code by interpretation.
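The back-end scheme can be illustrated with a minimal stack-machine sketch: the front end emits intermediate code as a list of instruction tuples, and a virtual machine interprets them. The opcode names and tuple encoding here are invented for illustration, not taken from the thesis.

```python
def run(code):
    """Interpret a list of (opcode, *args) tuples on an operand stack
    and return the value left on top. A real VM would add control flow,
    variables, and error handling; this shows only the dispatch loop."""
    stack = []
    for op, *args in code:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack.pop()
```

For instance, the expression `2 + 3 * 4` compiles to `[("PUSH", 2), ("PUSH", 3), ("PUSH", 4), ("MUL",), ("ADD",)]`, which the interpreter evaluates to 14. Interpreting intermediate code this way trades execution speed for portability and a much simpler back end, which is the usual motivation for the choice.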
The lexical- and syntax-analysis part is further divided into analysis of the input-command data stream and of the Fortran source-program data stream. The latter is the focus of this thesis; its core consists of the lexer generator specification grammar_lexer.mll and the parser generator specification grammar_paser.mly, which transform the Fortran source string that is to undergo the automatic-differentiation transformation into a data stream of various syntax units.
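The `.mll`/`.mly` files are ocamllex and ocamlyacc-style specifications; the downstream automatic-differentiation transformation they feed can be sketched in miniature. Assuming (hypothetically) that the parsed program arrives as straight-line assignments `(target, op, left, right)` over variable names, a forward-mode source-to-source transform appends a derivative assignment after each original one, with `d` + name denoting the derivative variable:

```python
def ad_forward(stmts):
    """Augment each assignment 'v = a OP b' with its forward-mode
    derivative assignment. Only '+' and '*' are handled; a real tool
    covers the full expression grammar and intrinsic functions."""
    out = []
    for v, op, a, b in stmts:
        out.append(f"{v} = {a} {op} {b}")
        if op == "+":
            out.append(f"d{v} = d{a} + d{b}")        # sum rule
        elif op == "*":
            out.append(f"d{v} = d{a}*{b} + {a}*d{b}")  # product rule
    return out
```

So `ad_forward([("t1", "*", "x", "x")])` emits the original statement followed by `dt1 = dx*x + x*dx`, mirroring how a source-transformation AD tool rewrites code statement by statement once the parser has produced its stream of syntax units.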