U.S. Department of Commerce
National Bureau of Standards

Computer Science and Technology

NBS Special Publication 500-70/1

NBS Minimal BASIC Test Programs - Version 2, User's Manual
Volume 1 - Documentation
NATIONAL BUREAU OF STANDARDS

The National Bureau of Standards¹ was established by an act of Congress on March 3, 1901. The Bureau's overall goal is to strengthen and advance the Nation's science and technology and facilitate their effective application for public benefit. To this end, the Bureau conducts research and provides: (1) a basis for the Nation's physical measurement system, (2) scientific and technological services for industry and government, (3) a technical basis for equity in trade, and (4) technical services to promote public safety. The Bureau's technical work is performed by the National Measurement Laboratory, the National Engineering Laboratory, and the Institute for Computer Sciences and Technology.

THE NATIONAL MEASUREMENT LABORATORY provides the national system of physical and chemical and materials measurement; coordinates the system with measurement systems of other nations and furnishes essential services leading to accurate and uniform physical and chemical measurement throughout the Nation's scientific community, industry, and commerce; conducts materials research leading to improved methods of measurement, standards, and data on the properties of materials needed by industry, commerce, educational institutions, and Government; provides advisory and research services to other Government agencies; develops, produces, and distributes Standard Reference Materials; and provides calibration services. The Laboratory consists of the following centers:

Absolute Physical Quantities² — Radiation Research — Thermodynamics and Molecular Science — Analytical Chemistry — Materials Science.

THE NATIONAL ENGINEERING LABORATORY provides technology and technical services to the public and private sectors to address national needs and to solve national problems; conducts research in engineering and applied science in support of these efforts; builds and maintains competence in the necessary disciplines required to carry out this research and technical service; develops engineering data and measurement capabilities; provides engineering measurement traceability services; develops test methods and proposes engineering standards and code changes; develops and proposes new engineering practices; and develops and improves mechanisms to transfer results of its research to the ultimate user. The Laboratory consists of the following centers:

Applied Mathematics — Electronics and Electrical Engineering² — Mechanical Engineering and Process Technology² — Building Technology — Fire Research — Consumer Product Technology — Field Methods.

THE INSTITUTE FOR COMPUTER SCIENCES AND TECHNOLOGY conducts research and provides scientific and technical services to aid Federal agencies in the selection, acquisition, application, and use of computer technology to improve effectiveness and economy in Government operations in accordance with Public Law 89-306 (40 U.S.C. 759), relevant Executive Orders, and other directives; carries out this mission by managing the Federal Information Processing Standards Program, developing Federal ADP standards guidelines, and managing Federal participation in ADP voluntary standardization activities; provides scientific and technological advisory services and assistance to Federal agencies; and provides the technical foundation for computer-related policies of the Federal Government. The Institute consists of the following centers:

Programming Science and Technology — Computer Systems Engineering.

¹ Headquarters and Laboratories at Gaithersburg, MD, unless otherwise noted; mailing address Washington, DC 20234.
² Some divisions within the center are located at Boulder, CO 80303.
Computer Science and Technology

NBS Special Publication 500-70/1

NBS Minimal BASIC Test Programs - Version 2, User's Manual
Volume 1 - Documentation

John V. Cugini
Joan S. Bowden
Mark W. Skall

Center for Programming Science and Technology
Institute for Computer Sciences and Technology
National Bureau of Standards
Washington, DC 20234

U.S. DEPARTMENT OF COMMERCE
Philip M. Klutznick, Secretary
Luther H. Hodges, Jr., Deputy Secretary
Jordan J. Baruch, Assistant Secretary for Productivity, Technology and Innovation

National Bureau of Standards
Ernest Ambler, Director

Issued November 1980
Reports on Computer Science and Technology

The National Bureau of Standards has a special responsibility within the Federal Government for computer science and technology activities. The programs of the NBS Institute for Computer Sciences and Technology are designed to provide ADP standards, guidelines, and technical advisory services to improve the effectiveness of computer utilization in the Federal sector, and to perform appropriate research and development efforts as foundation for such activities and programs. This publication series will report these NBS efforts to the Federal computer community as well as to interested specialists in the academic and private sectors. Those wishing to receive notices of publications in this series should complete and return the form at the end of this publication.

National Bureau of Standards Special Publication 500-70/1
Nat. Bur. Stand. (U.S.), Spec. Publ. 500-70/1, 79 pages (Nov. 1980)
CODEN: XNBSAV

Library of Congress Catalog Card Number: 80-600163

U.S. GOVERNMENT PRINTING OFFICE
WASHINGTON: 1980

For sale by the Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402
Price $4.00 (Add 25 percent for other than U.S. mailing)
NBS Minimal BASIC Test Programs - Version 2
User's Manual
Volume 1 - Documentation

John V. Cugini
Joan S. Bowden
Mark W. Skall

Abstract: This publication describes the set of programs developed by NBS for the purpose of testing conformance of implementations of the computer language BASIC to the American National Standard for Minimal BASIC, ANSI X3.60-1978. The Department of Commerce has adopted this ANSI standard as Federal Information Processing Standard 68. By submitting the programs to a candidate implementation, the user can test the various features which an implementation must support in order to conform to the standard. While some programs can determine whether or not a given feature is correctly implemented, others produce output which the user must then interpret to some degree. This manual describes how the programs should be used so as to interpret correctly the results of the tests. Such interpretation depends strongly on a solid understanding of the conformance rules laid down in the standard, and there is a brief discussion of these rules and how they relate to the test programs and to the various ways in which the language may be implemented.

Key words: BASIC; language processor testing; Minimal BASIC; programming language standards; software standards; software testing

Acknowledgments: Version 2 owes its existence to the efforts and example of many people. Dr. David Gilsinn and Mr. Charles Sheppard, the authors of version 1*, deserve credit for construction of that first system, of which version 2 is a refinement. In addition, they were generous in their advice on many of the pitfalls to avoid on the second iteration. Mr. Landon Dyer assisted with the testing and document preparation. It is also important to thank the many people who sent in comments and suggestions on Version 1. We hope that all the users of the resulting Version 2 will help us improve it further.

* issued as an NBS Internal Report; no longer available.
Page 2

Table of Contents

Section    Page

1 How to Use This Manual  6

2 The Language Standard for BASIC  7
  2.1 History and Prospects  7
  2.2 The Minimal BASIC Language  8
  2.3 Conformance to the Standard  9
    2.3.1 Program conformance  9
    2.3.2 Implementation conformance  10

3 Determining Implementation Conformance  11
  3.1 Test programs as test data, not algorithms  11
  3.2 Special Issues Raised by the Standard Requirements  12
    3.2.1 Implementation-defined features  12
    3.2.2 Error and Exception Reporting  12

4 Structure of the Test System  15
  4.1 Testing Features Before Using Them  15
  4.2 Hierarchical Organization of the Tests  16
  4.3 Environment Assumptions  16
  4.4 Operating and Interpreting the Tests  17
    4.4.1 User Checking vs. Self Checking  17
    4.4.2 Types of Tests  18
      4.4.2.1 Standard Tests  18
      4.4.2.2 Exception Tests  18
      4.4.2.3 Error Tests  19
      4.4.2.4 Informative Tests  22
    4.4.3 Documentation  23

5 Functional Groups of Test Programs  26
  5.1 Simple PRINTing of string constants  26
  5.2 END and STOP  26
  5.3 PRINTing and simple assignment (LET)  27
    5.3.1 String variables and TAB  27
    5.3.2 Numeric constants and variables  28
  5.4 Control Statements and REM  28
  5.5 Variables  29
  5.6 Numeric Constants, Variables, and Operations  29
    5.6.1 Standard Capabilities  29
    5.6.2 Exceptions  30
    5.6.3 Errors  31
    5.6.4 Accuracy tests - Informative  31
  5.7 FOR-NEXT  33
  5.8 Arrays  34
    5.8.1 Standard Capabilities  34
    5.8.2 Exceptions  34
    5.8.3 Errors  34
  5.9 Control Statements  35
    5.9.1 GOSUB and RETURN  35
    5.9.2 ON-GOTO  36
  5.10 READ, DATA, and RESTORE  36
    5.10.1 Standard Capabilities  36
    5.10.2 Exceptions  36
    5.10.3 Errors  36
  5.11 INPUT  37
    5.11.1 Standard Capabilities  37
    5.11.2 Exceptions  38
    5.11.3 Errors  40
  5.12 Implementation-supplied Functions  42
    5.12.1 Precise functions: ABS, INT, SGN  42
    5.12.2 Approximated functions: SQR, ATN, COS, EXP, LOG, SIN, TAN  42
    5.12.3 RND and RANDOMIZE  43
      5.12.3.1 Standard Capabilities  44
      5.12.3.2 Informative Tests  44
    5.12.4 Errors  45
  5.13 User-defined Functions  45
    5.13.1 Standard Capabilities  45
    5.13.2 Errors  45
  5.14 Numeric Expressions  46
    5.14.1 Standard Capabilities in context of LET-statement  46
    5.14.2 Expressions in other contexts: PRINT, IF, ON-GOTO, FOR  46
    5.14.3 Exceptions in subscripts and arguments  47
    5.14.4 Exceptions in other contexts: PRINT, IF, ON-GOTO, FOR  47
  5.15 Miscellaneous Checks  47
    5.15.1 Missing keyword  47
    5.15.2 Spaces  48
    5.15.3 Quotes  48
    5.15.4 Line Numbers  48
    5.15.5 Line longer than 72 characters  48
    5.15.6 Margin Overflow for Output Line  49
    5.15.7 Lowercase characters  49
    5.15.8 Ordering Strings  49
    5.15.9 Mismatch of Types in Assignment  49

6 Tables of Summary Information about the Test Programs  50
  6.1 Group Structure of the Minimal BASIC Test Programs  51
  6.2 Test Program Sequence  54
  6.3 Cross-reference between ANSI Standard and Test Programs  71

Appendix A: Differences between Versions 1 and 2 of the Minimal BASIC Test Programs  75

References  76

Figures:
  Figure 1 - Error and Exception Handling  14
  Figure 2 - Format of Test Program Output  25
  Figure 3 - Instructions for the INPUT Exceptions Test  41
Page 6

1 HOW TO USE THIS MANUAL

This manual presents background information and operating instructions for the NBS Minimal BASIC test programs. Readers who want a general idea of what the programs are supposed to do and why they are structured as they are should read sections 2 and 3. These sections give a brief explanation of BASIC, how it is standardized, and how the test programs help measure conformance to the standard. Those who wish to know how to interpret the results of program execution should also read section 3 and then section 4 for the general rules of interpretation and section 5 for information peculiar to individual programs and groups of programs within the test system. Section 6 contains tables of summary information about the tests.

Volume 2 of this publication consists of the source listings and sample outputs for all the test programs.

The test system for BASIC should be helpful to anyone with an interest in measuring the conformance of an implementation of BASIC (e.g., a compiler or interpreter) to the Minimal BASIC standard. This would include 1) purchasers who want to be sure they are buying a standard implementation, 2) programmers who must use a given implementation and want to know in which areas it conforms to the standard and which features to avoid or be wary of, and 3) implementors who may wish to use the tests as a development and debugging tool.

Much of this manual is derived from the technical specifications in the American National Standard for Minimal BASIC, ANSI X3.60-1978 [1]. You will need a copy of that standard in order to understand most of the material herein. Copies are available from the American National Standards Institute, 1430 Broadway, New York, NY 10018. This document will frequently cite ANSI X3.60-1978, and references to "the standard" should be taken to mean that ANSI publication.

The measure of success for Version 2 of the Minimal BASIC Test Programs is its usefulness to you. We at NBS would greatly appreciate hearing about your evaluation of the test system. We will respond to requests for clarification concerning the system and its relation to the standard. Also, we will maintain a mailing list of users who request to be notified of changes and major clarifications. Please direct all comments, questions, and suggestions to:

Project Manager
NBS BASIC Test Programs
National Bureau of Standards
Technology Bldg., Room A-265
Washington, DC 20234
Page 7

2 THE LANGUAGE STANDARD FOR BASIC

2.1 History And Prospects

BASIC is a computer programming language developed in the mid-1960's by Professors John G. Kemeny and Thomas E. Kurtz at Dartmouth College. The primary motivation behind its design was educational (in contrast to the design goals for, e.g., COBOL and FORTRAN) and accordingly the language has always emphasized ease of use and understanding as opposed to simple machine efficiency. In July 1973, NBS published a "Candidate Standard for Fundamental BASIC" [2] by Prof. John A. N. Lee of the University of Massachusetts at Amherst. This work represented the beginning of a serious effort to standardize BASIC. The first meeting of the American National Standards Technical Committee on the Programming Language BASIC, X3J2, convened at CBEMA headquarters in Washington, DC, on January 23-24, 1974, with Professor Kurtz as chairman. The committee adopted a program of work which envisioned development of a nucleus language followed by modularized enhancements. The nucleus finally emerged as Minimal BASIC, which was approved as an ANSI standard January 17, 1978. As its name implies, the language defined in the standard is one which any implementation of BASIC should encompass.

Meanwhile, NBS had been developing a set of test programs, the purpose of which was to exercise all the facilities defined in the standard and thereby test conformance of implementations to the standard. This test suite was released as NBSIR 78-1420-1, 2, 3, and 4, NBS Minimal BASIC Test Programs - Version 1 User's Manual, in January 1978. NBS distributed this version to more than 60 users, many of whom made suggestions about how the test suite might be improved. NBS has endeavored to incorporate these suggestions and re-design the tests where it seemed useful to do so. The result is the current Version 2 of the test suite. Appendix A contains a summary of the differences between versions 1 and 2.

In order to provide a larger selection of high-level programming languages for the Federal government's ADP activities, the Department of Commerce has incorporated the ANSI standard as Federal Information Processing Standard 68. This means, among other things, that implementations of BASIC sold to the Federal government after an 18-month transition period must conform to the technical specifications of the ANSI standard: hence the NBS interest in developing a tool for measuring such conformance.

ANSI X3J2 is currently (April 1980) working on a language standard for a much richer version of BASIC, which will provide such features as real-time process control, graphics, string manipulation, file handling, exception handling, and array manipulation. The current expectation is for ANSI adoption of this standard sometime in 1982. It is probable that such a standard for a full version of BASIC would be adopted as a Federal Information Processing Standard.
Page 8

2.2 The Minimal BASIC Language

Minimal BASIC is distinguished among standardized computer languages by its simplicity and its suitability for the casual user. It is simple, not only because of its historic purpose as a computer language for the casual user, but also because the ANSI BASIC committee organized its work around the concept of first defining a core or nucleus language which one might reasonably expect any implementation of BASIC to include, to be followed by a standard for enhanced versions of the language. Therefore the tendency was to defer standardization of all the sophisticated features and include only the simple features. In particular, Minimal BASIC has no facilities for file handling, string manipulation, or array manipulation and has only rudimentary control structures. Although BASIC was originally designed for interactive use, the standard does not restrict implementations to that use.

Minimal BASIC provides for only two types of data, numeric (with the properties usually associated with real or floating-point numbers) and string. String data can be read as input, printed as output, and moved and compared internally. The only legal comparisons, however, are equal or not equal; no collating sequence among characters is defined. Numeric data can be manipulated with the usual choice of operations: addition, subtraction, multiplication, division, and involution (sometimes called exponentiation). There is a modest assortment of numeric functions. One- and two-dimensional arrays are allowed, but only for numeric data, not string.

For control, there is a GOTO, an IF which can cause control to jump to any line in the program, a GOSUB and RETURN for internal subroutines, a FOR and NEXT statement to execute loops while incrementing a control-variable, and a STOP statement. Input and output are accomplished with the INPUT and PRINT statements, both of which are designed for use on an interactive terminal. There is also a feature which has no real equivalent among the popular computer languages: a kind of internal file of data values, numeric and string, which can be assigned to variables with a READ statement (not to be confused with INPUT, which handles external data). The internal set of values is created with DATA statements, and may be read repetitively from the beginning of the set by using the RESTORE statement.

The programmer may (but need not) declare the size of arrays with the DIM statement and specify that subscripts begin at 0 or 1 with an OPTION statement. The programmer may also establish numeric user-defined functions with the DEF statement, but only as simple expressions, taking one argument. The RANDOMIZE statement works in conjunction with the RND function. If RND is called without execution of RANDOMIZE, it always returns the same sequence of pseudo-random numbers for each execution of the program. Executing RANDOMIZE causes RND to return an unpredictable set of values each time.

Page 9

The REM statement allows the programmer to insert comments or remarks throughout the program.
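As a brief illustration of the flavor of the language (this fragment is illustrative only and is not one of the test programs), the following program uses several of the features just described: OPTION and DIM, READ and DATA, a user-defined function, a FOR-NEXT loop, an IF, and PRINT with TAB:

100 REM ILLUSTRATIVE EXAMPLE ONLY - NOT PART OF THE TEST SUITE
110 OPTION BASE 1
120 DIM A(5)
130 DEF FNS(X) = X*X
140 READ N$
150 DATA "SUM OF SQUARES"
160 LET T = 0
170 FOR I = 1 TO 5
180 READ A(I)
190 LET T = T + FNS(A(I))
200 NEXT I
210 DATA 1, 2, 3, 4, 5
220 PRINT N$; TAB(20); T
230 IF T = 55 THEN 260
240 PRINT "UNEXPECTED RESULT"
250 GOTO 270
260 PRINT "RESULT IS 55, AS EXPECTED"
270 END

A conforming implementation accepts this program and prints the heading, the value 55 beginning near column 20, and the confirming message.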
Although the facilities of the language are modest, there is one area in which the standard sets rather stringent requirements, namely, diagnostic messages. The mandate of the standard that implementations exhibit reasonable behavior even when presented with unreasonable programs follows directly from the design goal of solicitude towards the beginning or casual user. Thus, the standard takes care to define what happens if the user commits any of a number of syntactic or semantic blunders. The need to test these diagnostic requirements strongly affected the overall shape of the test system, as will become apparent in later sections.

2.3 Conformance To The Standard

There are many reasons for establishing a standard for a programming language: the promotion of well-defined and well-designed languages as a consequence of the standardizing process itself, the ability to create language-based rather than machine-based software tools and techniques, the increase in programmer productivity which an industry-wide standard fosters, and so on. At bottom, however, there is one result essential to the success of a standard: program portability. The same program should not evoke perniciously different behavior in different implementations. Ideally, the same source code and data environment should produce the same output, regardless of the machine environment.

How does conformance to the standard work towards this goal? Essentially, the standard defines the set of syntactically legal programs, assigns a semantic meaning to all of them and then requires that implementations (sometimes called processors; we will use the two terms interchangeably throughout this document) actually produce the assigned meaning when presented with a legal program.
2.3.1 Program Conformance

Program conformance, then, is a matter of syntax. We look at the source code and determine whether or not it obeys all the rules laid down in the standard. These rules are mostly spelled out in the syntax sections of the standard using a variant of Backus-Naur Form (BNF). They are supplemented, however, by certain context-sensitive constraints, which are contained in the semantics sections. Thus the rules form a ceiling for conforming programs. If a program has been constructed according to the rules, it meets the standard, without regard to its semantic meaning. The syntactic rules are fully implementation independent: a standard program must be accepted by all standard processors.

Page 10

Further, since we can tell if a program is standard by mere inspection, it would be a reasonably easy job to build a recognizer or syntax checker which could always discover whether or not a program is standard. Unfortunately, such a recognizer could not be written in Minimal BASIC itself and this probably explains why no recognizer to check program conformance has gained wide acceptance. At least one such recognizer does exist, however. Called PBASIC [3], it was developed at the University of Kent at Canterbury and is written in PFORT, a portable subset of FORTRAN. PBASIC was used to check the syntax of the Version 2 Minimal BASIC Test Programs.
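To make the distinction between the BNF productions and the context-sensitive constraints concrete (the fragment below is illustrative and is not taken from the test programs), consider a program each of whose lines matches the BNF, but which still fails to conform:

100 REM EVERY LINE HERE MATCHES THE BNF PRODUCTIONS
110 LET A = 1
120 GOTO 500
130 PRINT A
140 END

Because line 120 refers to line number 500, which does not appear in the program, the program violates a constraint stated in the semantics sections and therefore is not a standard program, even though a purely context-free parse would accept it. Changing line 120 to GOTO 130 yields a standard program that every conforming processor must accept.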
2.3.2 Implementation Conformance

Implementation conformance is derivative of the more primitive concept of program conformance. In contrast to the way in which program conformance is described, processor conformance is specified functionally, not structurally. The essential requirement is that an implementation accept any standard program and produce the behavior specified by the language standard. That is, the implementation must make the proper connection between the syntax of the program and the operation of the computer system. Note that this is a black box description of the implementation. Whether the realization of the language is done with a compiler, an interpreter, firmware, or by being hard-wired, is irrelevant. Only the external behavior is important, not the internal structure - quite the opposite of the way program conformance is determined.

The difference in the way conformance is defined for programs and processors radically affects the test methodology by which we determine whether the standard is met. The relevant point is that there currently is no way to be certain that an implementation does conform to the standard, although we can sometimes be certain that it does not. In short, there is no algorithm, such as the recognizer that exists for programs, by which we can answer definitively the question, "Is this a standard processor?"

Furthermore, the standard acts as a floor for processors, rather than a ceiling. That is, an implementation must accept and process at least all standard programs, but may also implement enhancements to the language and thus accept non-standard programs as well. Another difference between program and processor conformance is that the description of processor conformance allows for some implementation dependence even in the treatment of standard programs. Thus for some standard programs there is no unique semantic meaning, but rather a set of meanings, usually similar, among which implementations can choose.
Page 11

3 DETERMINING IMPLEMENTATION CONFORMANCE

3.1 Test Programs As Test Data, Not Algorithms

The test programs do not embody some definitive algorithm by which the question of processor conformance can be answered yes or no. There is an important sense in which it is only accidental that they are programs at all; indeed, some of them, syntactically, are not. Rather their primary function is as test data. It is readily apparent, for instance, that the majority of BASIC test programs are algorithmically trivial; some consist only of a series of PRINT statements. Viewed as test data, however, i.e., a series of inputs to a system whose behavior we wish to probe, the underlying motivation for their structure becomes intelligible. Simply put, it is the goal of the tests to exercise at least one representative of every meaningfully distinct type of syntactic structure or semantic behavior provided for in the language standard. This strategy is characteristic of testing in general: all one can do is submit a representative subset of the typically infinite number of possible inputs to the system under investigation (the implementation) and see whether the results are in accord with the specifications for that system (the language standard). Thus, successful results of the tests are necessary, but not sufficient to show that the specifications are met. A failed test shows that a language implementation is not standard. A passed test shows that it may be. A long series of passed tests which seem to cover all the various aspects of the language gives us a large measure of confidence that the implementation conforms to the standard.

It can scarcely be stressed too strongly that the test programs do not represent some self-sufficient algorithm which will automatically deliver correct results to a passive observer. Rather they are best seen as one component in a larger system comprising not only the programs, but the documentation of the programs, the documentation of the processor under test, and, not least, a reasonably well-informed user who must actively interpret the results of the tests in the context of some broad background knowledge about the programs, the processor, and the language standard. If, for example, a processor rejects a standard program, it certainly fails to conform to the standard; yet this is a type of behavior which can hardly be detected by the program itself: only a human observer who knows that the processor must accept standard programs, and that this program is standard, is capable of the proper judgment that this processor therefore violates the language standard.
Page 12

3.2 Special Issues Raised By The Standard Requirements

3.2.1 Implementation-defined Features

At several points in the standard, processors are given a choice about how to implement certain features. These subjects of choice are listed in Appendix C of the standard. In order to conform, implementations must be accompanied by documentation describing their treatment of these features (see section 1.4.2(7) of the standard). Many of these choices, especially those concerning numeric precision, string and numeric overflow, and uninitialized variables, can have a marked effect on the result of executing even standard programs. A given program, for instance, might execute without exceptions on one standard implementation, and cause overflow on another, with a notably different numeric result. The programs that test features in these areas call for especially careful interpretation by the user.
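For example (an illustrative fragment, not one of the test programs), whether the following standard program raises a numeric overflow exception depends entirely on the implementation-defined range of numeric values:

100 REM BEHAVIOR DEPENDS ON IMPLEMENTATION-DEFINED NUMERIC RANGE
110 LET A = 1E30
120 LET B = A * A
130 PRINT B
140 END

An implementation whose largest representable value is near 1E38 must report an overflow exception at line 120, supply machine infinity, and continue; an implementation with a wider range simply prints a value close to 1E60. Both behaviors can conform, provided the accompanying documentation describes the numeric range.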
Another class of implementation-defined features is that associated with language enhancements. If an implementation executes non-standard programs, it also must document the meaning it assigns to the non-standard constructions within them. For instance, if an implementation allows comparison of strings with a less-than operator, it must document its interpretation of this comparison.
3.2.2 Error And Exception Reporting

The standard for BASIC, in view of its intended user base of beginning and casual programmers, attempts to specify what a conforming processor must do when confronted with non-standard circumstances. There are two ways in which this can happen: 1) a program submitted to the processor might not conform to the standard syntactic rules, or 2) the executing program might attempt some operation for which there is no reasonable semantic interpretation, e.g., division by zero, assignment to a subscripted variable outside of the array. In the BASIC standard, the first case is called an error, and the second an exception, and in order to conform, a processor must take certain actions upon encountering either sort of anomaly.

Given a program with a syntactically non-standard construction, the processor must either reject the program with a message to the user noting the reason for rejection, or, if it accepts the program, it must be accompanied by documentation which describes the interpretation of the construction.

If a condition defined as an exception arises in the course of execution, the processor is obliged, first to report the exception, and then to do one of two things, depending on the type of exception: either it must apply a so-called recovery procedure and continue execution, or it must terminate execution.
Page 13

Note that it is the user, not the program, who must determine whether there has been an adequate error or exception report, or whether appropriate documentation exists. The pseudo-code in Figure 1 describes how conforming implementations must treat errors. It may be thought of as an algorithm which the user (not the programs) must execute in order to interpret correctly the effect of submitting a test program to an implementation.

The procedure for error handling in Figure 1 speaks of a processor accepting or rejecting a program. The glossary (sec. 19) of the standard defines accept as "to acknowledge as being valid". A processor, then, is said to reject a program if it in some way signifies to the user that an invalid construction (and not just an exception) has been found, whenever it encounters the presumably non-standard construction, or if the processor simply fails to execute the program at all. A processor implicitly accepts a program if the processor encounters all constructions within the program with no indication to the user that the program contains constructions ruled out by the standard or the implementation's documentation.

In like manner, we can construct pseudo-code operating instructions to the user, which describe how to determine whether an exception has been handled in conformance with the standard, and this is shown also in Figure 1.

As a point of clarification, it should be understood that these categories of error and exception apply to all implementations, both compilers and interpreters, even though they are more easily understood in terms of a compiler, which first does all the syntax checking and then all the execution, than of an interpreter. There is no requirement, for instance, that error reports precede exception reports. It is the content, rather than the timing, of the message that the standard prescribes. Messages rejecting errors should stress the fact of ill-formed source code. Exception reports should note the conditions, such as data values or flow of control, that are abnormal, without implying that the source code per se is invalid.
Page 14

Error Handling

if program is standard
    if program accepted by processor
        if correct results and behavior
            processor PASSES
        else
            processor FAILS (incorrect interpretation)
        endif
    else
        processor FAILS (rejects standard program)
    endif
else (program non-standard)
    if program accepted by processor
        if non-standard feature correctly documented
            processor PASSES
        else
            processor FAILS (incorrect/missing documentation
                for non-standard feature)*
        endif
    else (non-standard program rejected)
        if appropriate error message
            processor PASSES
        else
            processor FAILS (did not report reason for rejection)
        endif
    endif
endif

* note that all implementation-defined features must be documented (See Appendix C in the ANSI Standard) not just non-standard features.

Exception Handling

if processor reports exception
    if procedure is specified for exception
       and host system capable of procedure
        if processor follows specified procedure
            processor PASSES
        else
            processor FAILS (recovery procedure not followed)
        endif
    else (no procedure specified or unable to handle)
        if processor terminates program
            processor PASSES
        else
            processor FAILS (non-termination on fatal exception)
        endif
    endif
else
    processor FAILS (fail to report exception)
endif

Figure 1
Page 15

4 STRUCTURE OF THE TEST SYSTEM

The design of the test programs is an attempt to harmonize several disparate goals: 1) exercise all the individual parts of the standard, 2) test combinations of features where it seems likely that the interaction of these features is vulnerable to incorrect implementation, 3) minimize the number of tests, 4) make the tests easy to use and their results easy to interpret, and 5) give the user helpful information about the implementation even, if possible, in the case of failure of a test. The rest of this section describes the strategy we ultimately adopted, and its relationship to conformance and to interpretation by the user of the programs.

4.1 Testing Features Before Using Them

Perhaps the most difficult problem of design is to find some organizing principle which suggests a natural sequence to the programs. In many ways, the most natural and simple approach is simply to test the language features in the order they appear in the standard itself. The major problem with this strategy is that the tests must then use untested features in order to exercise the features of immediate interest. This raises the possibility that the feature ostensibly being tested might wrongly pass the test because of a flaw in the implementation of the feature whose validity is implicitly being assumed. Furthermore, when a test does report a failure, it is not clear whether the true cause of the failure was the feature under test or one of the untested features being used.

These considerations seemed compelling enough that we decided to order the tests according to the principle of testing features before using them. This approach is not without its own problems, however. First and most importantly, it destroys any simple correspondence between the tests and sections of the standard. The testing of a given section may well be scattered throughout the entire test sequence and it is not a trivial task to identify just those tests whose results pertain to the section of interest. To ameliorate this problem, we have been careful to note at the beginning of each test just which sections of the standard it applies to, and have compiled a cross-reference listing (see section 6.3), so that you may quickly find the tests relevant to a particular section. A second problem is that occasionally the programming of a test becomes artificially awkward because the language feature appropriate for a certain task hasn't been tested yet. While the programs generally abide by the test-before-use rule, there are some cases in which the price in programming efficiency and convenience is simply too high and therefore a few of the programs do employ untested features. When this happens, however, the program always generates a message telling you which untested feature it is depending on.

Page 16

Furthermore, we were careful to use the untested feature in a simple way unlikely to interact with the feature under test so as to mask errors in its own implementation.
4.2 Hierarchical Organization Of The Tests

Within the constraints imposed by the test-before-use rule, we tried to group together functionally related tests. This grouping should also help you interpret the tests better since you can usually concentrate on one part of the standard at a time, even if the parts themselves are not in order. Section 6.1 of this manual contains a summary of the hierarchical group structure. It relates a functional subject to a sequential range of tests and also to the corresponding sections of the standard. We strongly recommend that you read the relevant sections of the standard carefully before running the tests in a particular group. The documentation contained herein explains the rationale for the tests in each group, but it is not a substitute for a detailed understanding of the standard itself.

Many of the individual test programs are themselves further broken down into so-called sections. Thus the overall hierarchical subdivision scheme is given by, from largest to smallest: system, groups, sub-groups, programs, sections. Program sections are further discussed below under: 4.4.3 Documentation.
4.3 Environment Assumptions

The test programs are oriented towards executing in an interactive environment, but generally can be run in batch mode as well. Some of the programs do require input, however, and these present more of a problem, since the input needed often depends on the immediately preceding output of the program. See the sample output in Volume 2 for help in setting up data files if you plan to run all the programs non-interactively. The programs which use the INPUT statement are 73, 81, 84, 107-113, and 203.

We have tried to keep the storage required for execution within reasonable bounds. Array sizes are as small as possible, consistent with adequate testing. No program exceeds 300 lines in length. The programs print many informative messages which may be changed without affecting the outcome of the tests. If your implementation cannot handle a program because of its size, you should set up a temporary copy of the program with the informative messages cut down to a minimum and use that version. Be careful not to omit printing which is a substantive part of the test itself.
Page 17

The tests assume that the implementation-defined margin for output lines is at least 72 characters long and contains at least 5 print zones. This should not be confused with the length of a line in the source code itself. The standard requires implementations to accept source lines up to 72 characters long. If the margin is smaller than 72, the tests should still run (according to the standard), but the output will be aesthetically less pleasing.

Finally, the standard does not specify how the tests are to be submitted to the processor for execution. Therefore, the machine-readable part of the test system consists only of source code, i.e., there are no system control commands. It is your responsibility to submit the programs to the implementation in a natural way which does not violate the integrity of the tests.

4.4 Operating And Interpreting The Tests

This section will attempt to guide you through the practical aspects of using the test programs as a tool to measure implementation conformance. The more general issues of conformance are covered in section 3, and of course in the standard itself, especially sections 1 and 2 of the ANSI document.
4.4.1 User Checking Vs. Self Checking

All of the test programs require interpretation of their behavior by the user. As mentioned earlier, the user is an active component in the test system; the source code of the test programs is another component, subordinate to the test user. An important goal in the design of the programs was the minimization of the need for sophisticated interpretation of the test results; but minimization is not elimination. In the best case, the program will print out a conspicuous message indicating that the test passed or failed, and you need only interpret this message correctly. In other cases, you have to examine rather carefully the results and behavior of the program, and must apply the rules of the standard yourself. This interpretation is necessary in:

1. Programs which test that PRINTed output is produced in a certain format

2. Programs which test that termination occurs at the correct time (this arises in many of the exception tests)

3. Programs for which conformance depends on the existence of adequate documentation of implementation-defined features (both those defined in Appendix C of the standard and for any of the error tests that are accepted).

Page 18

The test programs are an only partially automated solution to the problem of determining processor conformance. Naive reliance on the test programs alone can very well lead to an incorrect judgment about whether an implementation meets the standard.
4.4.2 Types Of Tests

There are four types of test programs: 1) standard, 2) exception, 3) error, and 4) informative. Within each of the functional groups (described above, section 4.2) the tests occur in that order, although not all groups have all four types. The rules that pertain to each type follow. It is quite important that you be aware of which type of test you are running and use the rules which apply to that type.

4.4.2.1 Standard Tests

These tests are the ones whose title does not begin with "EXCEPTION" or "ERROR" and which generate a message about passing or failing at the end of each section. The paragraph below on documentation describes the concept of sections of a test. Since these programs are syntactically standard and raise no exception conditions, they must be accepted and executed to completion by the implementation. If the implementation fails to do this, it has failed the test. For example, if the implementation fails to recognize the key word OPTION, or if it does not accept one of the numeric constants, then the test has failed. Quite obviously, it is you who must apply this rule, since the program itself won't execute at all.

Assuming that the implementation does process the program, the next question is whether it has done so correctly. The program may be able to determine this itself, or you may have to do some active interpretation of the results. See the section below on documentation for more detail.
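To give an idea of what a self-checking section of a standard test can look like (the program and message texts below are illustrative only; the actual listings and wording used by the test programs appear in Volume 2), a section typically performs an operation whose result is fixed by the standard and then reports on it:

100 REM ILLUSTRATIVE SELF-CHECKING SECTION - NOT AN ACTUAL TEST PROGRAM
110 PRINT "SECTION 1: SIMPLE NUMERIC ASSIGNMENT"
120 LET A = 2
130 LET B = A + 2
140 IF B <> 4 THEN 170
150 PRINT " *** TEST PASSED *** "
160 GOTO 180
170 PRINT " *** TEST FAILED *** "
180 END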
4.4.2.2 Exception Tests

These tests have titles that begin with the word "EXCEPTION" and examine the behavior of the implementation when exception conditions occur during execution. Nonetheless, these programs are also standard conforming (i.e., syntactically valid) and thus the implementation must accept and process them.

There are two special considerations. The first is the distinction between so-called fatal and non-fatal exceptions. Some exceptions in the standard specify a recovery procedure which allows continued execution of the program, while others (the fatal exceptions) do not.

Page 19

If no recovery procedure is specified, the implementation must report the exception and then terminate the program. Programs testing fatal exceptions will print out a message that they are about to attempt the instruction causing the exception. If execution proceeds beyond that point, the test fails and prints a message so stating. With the non-fatal exceptions, the test program attempts to discover whether the recovery procedure has been applied or not and in this instance, the test is much like the standard tests, where the question is whether the implementation has followed the semantic rules correctly. For instance, the semantic meaning of division by zero is to report the exception, supply machine infinity, and continue. The standard, however, allows implementations to terminate execution after even a non-fatal exception "if restrictions imposed by the hardware or operating environment make it impossible to follow the given procedures." Because it would be redundant to keep noting this allowance, the test programs do not print such a message for each non-fatal exception. Therefore, when running a test for a non-fatal exception, note that the implementation may, under the stated circumstances, terminate the program, rather than apply the recovery procedure.
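As an illustration of the non-fatal case (again, the code and messages are a sketch, not one of the actual exception tests), a division-by-zero test can announce the operation, attempt it, and then examine whether the recovery value looks like machine infinity:

100 REM ILLUSTRATIVE NON-FATAL EXCEPTION SKETCH - NOT AN ACTUAL TEST PROGRAM
110 PRINT "ABOUT TO DIVIDE BY ZERO"
120 LET Z = 0
130 LET A = 1 / Z
140 PRINT "EXECUTION CONTINUED AFTER THE EXCEPTION"
150 IF A > 1E30 THEN 180
160 PRINT "RECOVERY VALUE DOES NOT LOOK LIKE MACHINE INFINITY"
170 GOTO 190
180 PRINT "RECOVERY VALUE IS VERY LARGE, AS EXPECTED"
190 END

A conforming implementation must report the exception at line 130 and either continue with machine infinity substituted for the result or, under the hardware and operating-environment allowance quoted above, terminate the program.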
The second special consideration is that in the case of INPUT and numeric and string overflow, the precise conditions for the exception can be implementation-defined. It is possible, therefore, that a standard program, executing on two different standard-conforming processors, using the same data, could cause an exception in one implementation and not in the other. The tests attempt to force the exception to occur, but it could happen, especially in the case of string overflow, that a syntactically standard program cannot force such an exception in a given processor. The documentation accompanying the implementation under test must describe correctly those implementation-defined features upon which the occurrence of exceptions depends. That is, it must be possible to find out from the documentation whether and when overflow and INPUT exceptions will occur in the test programs.

There is a summary of the requirements for exception handling in the form of pseudo-code in section 3.2.2 (Figure 1).
4.4.2.3 Error Tests

These tests have titles that begin with the word "ERROR" and examine how a processor handles a non-standard program. Each of these programs contains a syntactic construction explicitly ruled out by the standard, either in the various syntax sections, or in the semantics sections. Given a program with a syntactically non-standard construction, the processor must either reject the program with a message to the user noting the reason for rejection, or, if it accepts the program, it must be accompanied by documentation which describes the interpretation of the construction.

Page 20

Testing this requirement involves the submission of deliberately illegal programs to the processor to see if it will produce an appropriate message, or if it contains an enhancement of the language such as to assign a semantic meaning to the error. Thus we are faced with an interesting selection problem: out of the infinity of non-standard programs, which are worth submitting to the processor? Three criteria seem reasonable to apply:

1. Test errors which we might expect would be most difficult for a processor to detect, e.g., violations of context-sensitive constraints. These are the ones ruled out by the semantics rather than syntax sections of the standard.

2. Test errors likely to be made by beginners, for example, use of a two-character array name.

3. Test errors for which there may very well exist a language enhancement, e.g., comparing strings with "<" and ">".

Based on these criteria, the test system contains programs for the errors in the two lists which follow. The first list is for constructions ruled out by the semantics sections alone (these usually are instances of context-sensitive syntax constraints) and the second for plausible syntax errors ruled out by the BNF productions.
  948. Context—sensitive errors:
  949. 1. line number out of strictly ascending order
  950. 2. line number of zero
951. 3. line-length > 72 characters
952. 4. use of an undefined user function
  953. 5. use of a function before its definition
  954. 6. recursive function definition
  955. 7. duplicate function definition
  956. 8. number of arguments in function invocation <> number of
  957. parameters in function definition
958. 9. reference to numeric-supplied-function with incorrect number
  959. of arguments
  960. 10. no spaces around keywords
  961. 11. spaces within keywords and other elements or before line
  962. number
  964. 12. non-existent line number for GOTO, GOSUB, IF...THEN,
  965. ON...GOTO
  966. 13. mismatch of control variables in FOR-blocks (e.g.,
  967. interleaving)
  968. 14. nested FOR-blocks with same variable
  969. 15. jump into FOR-block
  970. 16. conflict on number of dimensions among references: A, A(1),
  971. A(1,1)
  972. 17. conflict on number of dimensions between DIM and reference,
  973. e.g., DIM A(20) and either A or A(2,2)
  974. 18. reference to subscripted variable followed by DIMensioning
  975. thereof
  976. 19. multiple OPTION statements
  977. 20. OPTION follows reference to subscripted variable
  978. 21. OPTION follows DIM
  979. 22. OPTION BASE 1 followed by DIM A(0)
  980. 23. DIM of same variable twice
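For illustration only (this fragment is not one of the test programs, and
its line numbers are arbitrary), the following lines are syntactically
well-formed but violate item 22 above; a conforming processor must either
reject the program or document the meaning it assigns to it:

   10 OPTION BASE 1
   20 DIM A(0)
   30 LET A(1) = 1
   40 END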
  981. Context-free errors:
982. 1. use of long name for array, e.g., A1(1)
  983. 2. assignment of string to number and number to string
  984. 3. assignment without the keyword LET
  985. 4. comparison of two strings for < or >
  986. 5. comparison of a string with a number
  987. 6. unmatched parenthesis in expression
  988. 7. FOR without matching NEXT and vice-versa
  989. 8. multiple parameters in parameter list
  990. 9. line without line-number
  991. 10. line number longer than four digits
  992. 11. quoted strings containing the quote character or lowercase
  993. letters
  995. 12. unquoted strings containing quoted-string-characters
  996. 13. type mismatch on function reference (using string as an
  997. argument)
  998. 14. DEF with string variable for parameter
  999. 15. DEF with multiple parameters
  1000. 16. misplaced or missing END-statement
  1001. 17. null entries in various lists (INPUT, DATA, READ, e.g.)
1002. 18. use of "**" as involution operator
  1003. 19. adjacent operators, such as 2 ^ -4
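Again for illustration only (not an actual test program), the fragment
below commits error 4 in this list: the string comparison in line 30 uses
"<", which the BNF allows only for numeric comparisons, although many
processors accept it as an enhancement:

   10 LET A$ = "ABC"
   20 LET B$ = "ABD"
   30 IF A$ < B$ THEN 50
   40 PRINT "NOT LESS"
   50 END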
  1004. When developing programs to test for possible enhancements,
  1005. we also tried to assist the user in confirming what the actual
  1006. processor behavior is, so that it may be checked against the
  1007. documentation. For example, the program that tests whether the
  1008. implementation accepts "<" and ">" for comparison of strings also
  1009. displays the implicit character collating sequence if the
  1010. comparisons are accepted. When the implementation accepts an
1011. error program, be sure to check that the documentation does in
  1012. fact describe the actual interpretation of the error as exhibited
  1013. by the test program. If the error program is rejected, the
  1014. processor's error message should be a reasonably accurate
  1015. description of the erroneous construction.
  1016. There is a summary of the requirements for error handling in
  1017. the form of pseudo-code in section 3.2.2 (Figure 1).
  1018. 4.4.2.4 Informative Tests
  1019. Informative tests are very much like standard tests. The
  1020. implementation must accept and process them, since they are
  1021. syntactically standard. The difference is that the standard only
  1022. recommends, rather than requires, certain aspects of their
  1023. behavior. The pass/fail message (described below) and other
1024. program output indicate when a test is informative and not
  1025. mandatory. All the informative tests have to do with the quality
  1026. (as opposed to the existence) of various mathematical facilities.
  1027. Specifically, the accuracy of the numeric operations and
  1028. approximated functions and the randomness of the RND function are
  1029. the subjects of informative tests. Some of the standard tests
  1030. also have individual sections which are informative, and again
  1031. the pass/fail message is the key to which sections are
  1032. informative and which mandatory. If numeric accuracy is
  1033. important for your purposes, either as an implementor or a user,
  1034. you should analyze closely the results of the informative tests.
  1036. 4.4.3 Documentation
  1037. There are three kinds of documentation in the test system,
  1038. serving three complementary purposes:
  1039. 1. The user's manual (this document). The purpose of this
  1040. manual is to provide a global description of the test system
  1041. and how it relates to the standard and to conformance. At a
  1042. more detailed level, there is also a description of each
  1043. functional group of programs and the particular things you
  1044. should watch for when running that group.
  1045. 2. Program output. As far as possible, the programs attempt to
  1046. explain themselves and how they must be interpreted to
  1047. determine conformance. Nonetheless, they make sense only in
  1048. the context of some background knowledge of the BASIC
  1049. standard and conformance (more detail below on output
  1050. format).
  1051. 3. Remarks in the source code. Using the REM statement, the
  1052. programs attempt to clarify their own internal logic, should
  1053. you care to examine it. Many of the programs are
  1054. algorithmically trivial enough that remarks are superfluous,
  1055. but otherwise remarks are there to guide your understanding
  1056. of how the programs are intended to work.
  1057. There is a format for program output consistent throughout
1058. the test sequence. The program first prints its identifying
  1059. sequence number and title. The next line lists the sections of
  1060. the ANSI standard to which this test applies. After this program
  1061. header, there is general information, if any, pertaining to the
  1062. whole program. Following all this program-level output there is
  1063. a series of one or more sections, numbered sequentially within
  1064. the program number. Each section tests one aspect of the general
  1065. feature being exercised by the program. Every section header
  1066. displays the section number and title and any information
  1067. pertinent to that section. Then the message, "BEGIN TEST."
  1068. appears, after which the program attempts execution of the
  1069. feature under test. At this point, the test may print
  1070. information to help the user understand how execution is
  1071. proceeding.
  1072. Then comes the important part: a message, surrounded by
1073. asterisks, announcing "*** TEST PASSED ***" or "*** TEST FAILED
1074. ***". If the test cannot diagnose its own behavior, it will
  1075. print a conditional pass/fail message, prefacing the standard
  1076. message with a description of what must or must not have happened
  1077. for the test to pass. Be careful to understand and apply these
  1078. conditions correctly. It is a good idea to read the ANSI
  1079. standard with special attention in conjunction with this sort of
  1080. test, so that you can better understand the point of the
  1081. particular section.
  1083. There is no pass/fail message for the error tests, since
  1084. there is, of course, no standard semantics prescribed for a
  1085. non-standard construction. As mentioned above, error programs
  1086. usually generate messages to help you diagnose the behavior of
  1087. the processor when it does accept such a program.
  1088. After the pass/fail message will come a line containing "END
  1089. TEST." which signals that the section is finished. If there is
  1090. another section, the section header will appear next. If not,
  1091. there will be a message announcing the end of the program. Note
  1092. that each section passes or fails independently; all sections,
1093. not just the last, must print "*** TEST PASSED ***" for the
  1094. program as a whole to pass. Figure 2 contains a schematic
  1095. outline of standard program output.
  1097. Format of Test Program Output
  1098. PROGRAM FILE nn: descriptive program title.
  1099. ANSI STANDARD xx.x, yy.y ...
  1100. message if a feature is used before being tested, cf. section 4.1
  1101. and general remarks about the purpose of the program
  1102. SECTION nn.1: descriptive section title.
  1103. interpretive message for error or exception tests
  1104. and general remarks about the purpose of this section.
  1105. BEGIN TEST.
1106. function-specific messages and test results
1107. *** TEST PASSED (or FAILED) ***
or
  1108. *** INFORMATIVE TEST PASSED (or FAILED) ***
  1109. or
  1110. conditional pass/fail message when
  1111. it cannot be determined internally.
  1112. or
  1113. message to assist analysis of processor
  1114. behavior for error program
  1115. END TEST.
  1116. SECTION nn.2: descriptive section title.
  1117. .
  1118. SECTION nn.m: descriptive section title.
  1119. .
  1120. END PROGRAM nn
  1121. Figure 2
  1123. 5 FUNCTIONAL GROUPS OF TEST PROGRAMS
  1124. This section contains information specific to each of the
  1125. groups and sub-groups of programs within the test sequence.
  1126. Groups are arranged hierarchically, as reflected in the numbering
  1127. system. The sub-section numbers within this section correspond
  1128. to the group numbering in the table of section 6.1, e.g., section
  1129. 5.12.1.2 of the manual describes functional group 12.1.2.
  1130. It is the purpose of this section to help you understand the
  1131. overall objectives and context of the tests by providing
  1132. information supplementary to that already in the tests. This
  1133. section will generally not simply repeat information contained in
  1134. the tests themselves, except for emphasis. Where the tests
  1135. require considerable user interpretation, this documentation will
  1136. give you the needed background information. Where the tests are
  1137. self-checking, this documentation will be correspondingly brief.
  1138. We suggest that you first read the comments in this section to
  1139. get the general idea of what the tests are trying to do, read the
  1140. relevant sections of the ANSI standard to learn the precise
  1141. rules, and finally run the programs themselves, comparing their
  1142. output to the sample output in Volume 2. The messages written by
  1143. the test programs are intended to tell you in detail just what
  1144. behavior is necessary to pass, but these messages are not the
  1145. vehicle for explaining how that criterion is derived from the
  1146. standard. Program output should be reasonably intelligible by
  1147. itself, but it is better understood in the broader context of the
  1148. standard and its conformance rules.
  1149. 5.1 Simple PRINTing Of String Constants
  1150. This group consists of one program which tests that the
  1151. implementation is capable of the most primitive type of PRINTing,
  1152. that of string constants and also the null PRINT. Note that it
  1153. is entirely up to you to determine whether the test passes or
  1154. fails by assuring that the program output is consistent with the
  1155. expected output. The program's own messages describe what is
  1156. expected. You may also refer to the sample output in Volume 2 to
  1157. see what the output should look like.
  1158. 5.2 END And STOP
  1159. This group tests the means of bringing BASIC programs to
  1160. normal termination. These capabilities are tested early, since
  1161. all the programs use them. Both END and STOP cause execution to
  1162. stop when encountered, but STOP may appear anywhere in the
  1163. program any number of times. There must be exactly one END
  1164. statement in a program, and it must be the last line in the
1165. source code. Thus, END serves both as a syntactic marker for the
1166. end of the program and as an executable statement.
  1168. Since the program can't know when it has ended (although it
  1169. can know when it hasn't), you must assure that the programs
  1170. terminate at the right time.
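As a sketch of the required behavior (not one of the test programs), the
fragment below must print only its first message: STOP halts execution
wherever it is encountered, while END must still appear as the last line
of the source code.

   10 PRINT "BEFORE STOP"
   20 STOP
   30 PRINT "THIS LINE MUST NOT BE PRINTED"
   40 END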
  1171. 5.3 PRINTing And Simple Assignment (LET)
  1172. This group of programs examines the ability of the
  1173. implementation to print strings and numbers correctly. Both
  1174. constants and variables are tested as print-items. The
  1175. variables, of course, have to be given a value before they are
  1176. printed, and this is done with the LET statement.
  1177. PRINT is among the most semantically complex statements in
  1178. BASIC. Furthermore, the PRINT statement is the outstanding case
  1179. of a feature whose operation cannot be checked internally. The
  1180. consequence is that this group calls for the most sophisticated
  1181. user interpretation of any in the test sequence. Please read
  1182. carefully the specifications in the programs, section 12 of the
  1183. ANSI standard, and this documentation; the interpretation of
  1184. test results should then be reasonably clear.
  1185. The emphasis in this group is on the correct representation
  1186. of numeric and string values. There is some testing that TAB,
  1187. comma, and semi-colon perform their functions, but a challenging
  1188. exercise of these features is deferred until group 14.6 because
  1189. of the other features needed to test them.
  1190. 5.3.1 String Variables And TAB
  1191. The PRINTing of strings is fairly straightforward and should
  1192. be relatively easy to check, since there are no
  1193. implementation-defined features which affect the printing. The
  1194. only possible problem is the margin width. The program assumes a
  1195. margin of at least 60 characters with at least 4 print zones. If
  1196. your implementation supports only a shorter margin, you must make
  1197. due allowance for it. The standard does not prescribe a minimum
  1198. margin.
  1199. The string overflow test requires careful interpretation.
  1200. Your implementation must have a defined maximum string length,
  1201. and the fatal exception should occur on the assignment
  1202. corresponding to that documented length. If the implementation
  1203. supports at least 58 characters in the string, overflow should
  1204. not occur. Be sure, if there is no overflow exception report,
  1205. that the processor has indeed not lost data. Do this by checking
  1206. that the output has not been truncated. A processor that loses
  1207. string data without reporting overflow definitely fails.
  1208. Checking for a TAB exception is simple enough; just follow
  1209. the conditional pass/fail messages closely. Note that one
  1210. section of the test should not generate an exception since, even
  1212. though the argument itself is less than one, its value becomes
  1213. one after rounding.
  1214. 5.3.2 Numeric Constants And Variables
  1215. In the following discussion, the terms "significand",
  1216. "exrad", "explicit point", "implicit point", and "scaled" are
  1217. used in accordance with the meaning ascribed them in the ANSI
  1218. standard.
  1219. The rules for printing numeric values are fairly elaborate,
  1220. and, moreover, are heavily implementation-dependent; accordingly
  1221. conscientious scrutiny is in order. There are two rules to keep
  1222. in mind. First, the expected output format depends on the value
  1223. of the print-item, not its source format. In particular, integer
  1224. values should print as integers as long as the significand-width
  1225. can accommodate them, fractional values should print in explicit
  1226. point unscaled format where no loss of accuracy results, and the
  1227. rest should print in explicit point scaled format. For example
  1228. "PRINT 2.1E2" should produce "210" because the item has an
  1229. integer value, even though it is written in source code in
  1230. explicit point scaled format. Second, leading zeros in the exrad
  1231. and trailing zeros in the significand may be omitted. Thus, for
  1232. an implementation with a significand-width of 8 and an
  1233. exrad-width of 3, the value 1,230,000,000 could print as
  1234. "1.2300000E+009" at one extreme or "1.23E+9" at the other. The
  1235. tests generally display the expected output in the latter form,
  1236. but it should be understood that extra zeros can be tacked on to
  1237. the actual output, up to the widths specified for the
  1238. implementation.
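The two rules can be summarized with a small sketch (values taken from the
discussion above; the leading and trailing spaces that surround printed
numbers are not shown here):

   10 PRINT 2.1E2
   20 PRINT 1230000000
   30 END

On an implementation with a significand-width of six and an exrad-width of
two, line 10 should produce 210, and line 20 anything from 1.23E+9 to
1.23000E+09, depending on how many of the optional zeros the processor
retains.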
  1239. The tests in general are oriented toward the minimum
  1240. requirements of six decimal digits of accuracy, a significand
  1241. length of six and an exrad-width of two. You must apply the
  1242. standard requirements in terms of your own implementation's
  1243. widths, however.
  1244. 5.4 Control Statements And REM
  1245. This group checks that the simple control structures all
  1246. work when used in a simple way. Some of the same facilities are
  1247. checked more rigorously in later groups. As with PRINT, END and
  1248. STOP, these features must come early in the test sequence, since
  1249. a BASIC program cannot do much of consequence without them. If
  1250. any of these tests fail, the validity of much of the rest of the
  1251. test sequence is doubtful, since following tests rely heavily on
  1252. GOTO, GOSUB, and IF. Note especially that trailing blanks should
  1253. be significant in comparing strings, e.g. "ABC" <> "ABC ".
  1254. Subsequent tests which rely on this property of IF will give
  1255. false results if the implementation doesn't process the
  1256. comparison properly.
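A minimal sketch of the property being relied upon (not one of the actual
tests): a conforming processor must take the branch in line 10, since the
trailing blank makes the two strings unequal.

   10 IF "ABC" <> "ABC " THEN 40
   20 PRINT "TRAILING BLANK IGNORED"
   30 GOTO 50
   40 PRINT "TRAILING BLANK SIGNIFICANT"
   50 END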
  1258. The tests for GOTO and GOSUB exercise a variety of transfers
  1259. to make sure the processor handles control correctly. If
  1260. everything works, you should get intelligible, self-consistent
  1261. output. If the output looks scrambled, the test has failed.
  1262. There are no helpful diagnostics for failures since it is
  1263. impossible to anticipate exactly how a processor might
  1264. misinterpret transfers of control. Look carefully at the sample
  1265. output for the GOTO and GOSUB programs in Volume 2, to know what
  1266. to expect.
  1267. The IF...THEN tests use a somewhat complex algorithm, so pay
  1268. attention to the REM statements if you are trying to understand
  1269. the logic. On the other hand, these tests are easy to use
  1270. because they are completely self-checking. You need only look
  1271. for the pass/fail messages to see if they worked. It is worth
  1272. noting that the IF...THEN test for numeric values depends on the
  1273. validity of the IF...THEN test for strings, which comes just
  1274. before.
  1275. The error tests are understandable in light of the general
  1276. rules for interpretation of error programs given earlier.
  1277. 5.5 Variables
  1278. The first of these programs simply checks that the set of
  1279. valid names is as guaranteed by the standard. In particular, A,
1280. A0, and A$ are all distinct. There are no diagnostics for
  1281. failure, since we expect failures to be rare and it is simple
  1282. enough to isolate the misinterpretation by modifying the program,
  1283. if that proves necessary. A later test in group 8.1 tests that
  1284. the implementation fulfills the requirements for array names.
  1285. Default initialization of variables is one of the most
  1286. important aspects of semantics left to implementation definition.
  1287. Implementations may treat this however they want to, but it must
  1288. be documented, and you should check that the documentation agrees
  1289. with the behavior of the program. Thus this is not merely an
  1290. informative test; the processor must have correct documentation
  1291. for its behavior in order to conform.
  1292. 5.6 Numeric Constants, Variables, And Operations
  1293. 5.6.1 Standard Capabilities
  1294. This group of programs introduces the use of numeric
  1295. expressions, specifically those formed with the arithmetic
1296. operations (+, -, *, /, ^) provided in BASIC. The most
  1297. troublesome aspect of these tests is the explicit disavowal in
  1298. the standard of any criterion of accuracy for the result of the
  1299. operations. Thus it becomes somewhat difficult to say at what
  1300. point a processor fails to implement a given operation. We
  1302. finally decided to require exact results only for integer
  1303. arithmetic, and, in the case of non-integral operands, to apply
  1304. an extremely loose criterion of accuracy such that if an
  1305. implementation failed to meet it, one could reasonably conclude
  1306. either that the precedence rules had been violated or that the
  1307. operation had not been implemented at all.
  1308. Although the standard does not mandate accuracy for
  1309. expressions, it does require that individual numbers be accurate
  1310. to at least six significant decimal digits. This requirement is
  1311. tested by assuring that values which differ by 1 in the 6th digit
  1312. actually compare in the proper order, using the IF statement.
  1313. The rationale for the accuracy test is best explained with an
  1314. example: suppose we write the constant "333.333" somewhere in
  1315. the program. For six digits of accuracy to be maintained, it
  1316. must evaluate internally to some value between 333.3325 and
  1317. 333.3335, since six digits of accuracy implies an error less than
  1318. 5 in the 7th place. By the same reasoning, "333.334" must
  1319. evaluate between 333.3335 and 333.3345. Since the allowable
  1320. ranges do not overlap, the standard requires that 333.333 compare
  1321. as strictly less than 333.334. Of course this same reasoning
  1322. would apply to any two numbers which differed by 1 in the sixth
  1323. digit.
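In the spirit of that reasoning, the check reduces to comparisons of the
following form (a sketch only; the actual test covers many digit
positions):

   10 IF 333.333 < 333.334 THEN 40
   20 PRINT "SIX DIGITS OF ACCURACY NOT MAINTAINED"
   30 GOTO 50
   40 PRINT "SIX DIGIT COMPARISON OK"
   50 END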
  1324. The accuracy test not only assures that these minimal
  1325. requirements are met, but also attempts to measure how much
  1326. accuracy the implementation actually provides. It does this both
  1327. by comparing some numbers in the manner described above for 7, 8,
  1328. and 9 decimal digits, and also by using an algorithm to compute
  1329. any reasonable internal accuracy. Since such an algorithm is
  1330. highly sensitive to the peculiarities of the system's
  1331. implementation of arithmetic, this last test is informative only.
  1332. 5.6.2 Exceptions
  1333. The standard specifies a variety of exceptions for numeric
  1334. expressions. All the mandatory non-fatal exceptions occur when
  1335. machine infinity is exceeded and they all call for the
  1336. implementation to supply machine infinity as the result and
  1337. continue execution. The tests ensure that machine infinity is at
  1338. least as great as the guaranteed minimum of 1E38, but since
  1339. machine infinity is implementation-defined, you must assure that
  1340. the value actually supplied is accurately documented.
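For example (a sketch only, since whether the multiplication below
overflows depends on the implementation-defined machine infinity), if line
20 does overflow, a conforming processor must report the exception, supply
machine infinity as the value of B, and continue with line 30 rather than
terminate:

   10 LET A = 1E38
   20 LET B = A * 10
   30 PRINT B
   40 END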
  1341. It is worth repeating here the general guidance that the
  1342. timing of exception reports is not specified by the standard.
  1343. The wording is intentionally imprecise to allow implementations
  1344. to anticipate exceptions, if they desire. Such anticipation may
  1345. well occur for overflow and underflow of numeric constants; that
  1346. is, an implementation may issue the exception report before
  1347. execution of the program begins. Note that the recovery
  1348. procedure, substitution of machine infinity for overflow, remains
  1349. in effect.
  1351. Underflow, whether for expressions or constants, is only
  1352. recommended as an exception, but, in any case, zero must be
  1353. supplied when the magnitude of the result is below the minimum
  1354. representable by the implementation. Note that this is required
  1355. in the semantics sections (7.4 and 5.4) of the standard, not the
  1356. exception sections (7.5 and 5.5).
  1357. 5.6.3 Errors
  1358. These programs try out the effect of various constructions
  1359. which represent either common programming errors (missing
1360. parentheses) or common enhancements ("**" as the involution
  1361. operator) or a blend of the two (adjacent operators). No special
  1362. interpretation rules apply to these tests beyond those normally
  1363. associated with error programs.
  1364. 5.6.4 Accuracy Tests - Informative
  1365. Although the standard mandates no particular accuracy for
  1366. expression evaluation, such accuracy is nonetheless an important
  1367. measure of the quality of language implementation, and is of
  1368. interest to a large proportion of language users. Accordingly,
  1369. these tests apply a criterion of accuracy for the arithmetic
  1370. operations which is suggested by the standard's requirement that
  1371. individual numeric values be represented accurate to six
  1372. significant decimal digits. Note, however, that these tests are
  1373. informative, not only because there is no strict accuracy
  1374. requirement, but also because there is no generally valid way for
  1375. a computer to measure precisely the accuracy of its own
  1376. operations. Such a measurement involves calculations which must
  1377. use the very facilities being measured.
  1378. The criterion for passing or failing is based on the concept
  1379. that an implementation should be at least as accurate as a
  1380. reasonable hypothetical implementation which uses the least
  1381. accurate numeric representation allowed by the standard. It is
  1382. best explained by first considering accuracy for functions of a
  1383. single variable, and then generalizing to operations, which may
  1384. be thought of as functions of two variables. Given an internal
  1385. precision of at least d decimal digits, we simply require that
  1386. the computed value for f(x) (hereinafter denoted by "cf(x)") be
  1387. some value actually taken on by the function within the domain
  1388. (x-e,x+e), where
1389. e = 10 ^ (int(log10(abs(x))) + 1 - d)
  1390. For example, suppose we want to test the value returned by
  1391. sin(29.1234) and we specify that d=6. Then:
1392. e = 10 ^ (int(log10(29.1234)) + 1 - 6)
1393.   = 10 ^ (int(1.464) - 5)
  1394. = 1E-4
  1396. and so we require that csin(x) equal some value taken on by
  1397. sin(x) in the interval [29.1233, 29.1235]. This then reduces to
  1398. the test that -.7507297957 <= csin(29.1234) <= -.7505976588.
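The same computation can be written directly in BASIC (a sketch only;
since Minimal BASIC supplies only the natural logarithm, log10(x) is
obtained as LOG(X)/LOG(10)):

   10 LET X = 29.1234
   20 LET D = 6
   30 LET E = 10 ^ (INT(LOG(ABS(X)) / LOG(10)) + 1 - D)
   40 PRINT E
   50 END

With these values the program should print a value equal to 1E-4,
matching the hand calculation above, in whichever output form the rules
discussed in section 5.3.2 dictate.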
  1399. The motivation for the formula for e is as follows.
  1400. According to the rule for accuracy of numbers, the internal
  1401. representation of the argument must lie within [x - e/2, x +
  1402. e/2]. Now suppose that the internal representation is near an
  1403. endpoint of the legal interval, and that the granularity of the
  1404. machine (i.e., the difference between adjacent internal numeric
  1405. representations) in that region of the real number line is near e
  1406. (which would be the coarsest allowed, given accuracy of d
  1407. digits). Given this worst case, we would still want a value
  1408. returned for which the actual argument was closer to that
  1409. internal representation than to immediately adjacent
  1410. representations. This means that we allow for a variation of e/2
  1411. when the argument is converted from source to internal form, and
  1412. another variation of e/2 around the internal representation
  1413. itself. The maximum allowable variation along the x-axis is then
  1414. simply the sum of the worst-case variations: e/2 + e/2 = e.
  1415. This is reasonable if we think of a given internal form as
  1416. representing not only a point on the real number line, but the
  1417. set of points for which there is no closer internal form. Then,
  1418. all we know is that the source argument is somewhere within that
  1419. set and all we require is that the computed value of the function
  1420. be true for some (probably different) argument within the set.
  1421. For accuracy d, the maximum width of the set is of course e.
  1422. It should be noted that the first allowed variation of e/2
  1423. is inherent in the process of decimal (source) to, e.g., binary
  1424. (internal) conversion. The case for allowing a variation of e/2
  1425. around the internal representation itself is somewhat weaker. If
  1426. one insists on exact results within the internal numerical
  1427. manipulation, then the function would be allowed to vary only
  1428. within the domain [x - e/2, x + e/2], but we did not require this
  1429. in the tests.
  1430. Note that the above scheme not only allows for the discrete
  1431. nature of the machine, but also for numeric instability in the
  1432. function itself. Mathematically, if the value of an argument is
  1433. known to six places, it does not follow that the value of the
  1434. function is known to six places; the error may be considerably
  1435. more or less. For example, a function is often very stable near
  1436. where its graph crosses the y-axis, but not the x-axis (e.g. COS
  1437. (1E-22)) and very unstable where it crosses the x-axis but not
  1438. the y-axis (e.g. SIN (21.99)). By allowing the cf(x) to take on
  1439. any value in the specified domain, we impose strict accuracy
  1440. where it can be achieved, and permit low accuracy where
  1441. appropriate. Thus, the pass/fail criterion is independent of
  1442. both the argument and function; it reflects only how well the
  1443. implementation computed, relative to a worst-case six-digit
  1444. machine.
  1446. Finally, we must recognize that even if the value of a
  1447. function is computable to high accuracy (as with COS (1E-22)),
  1448. the graininess of the machine will again limit how accurately the
  1449. result itself can be represented. For this reason, there is an
  1450. additional allowance of e/2 around the result. This implies that
  1451. even if the result is computable to, say, 20 digits, we never
  1452. require more than 6 digits of accuracy.
  1453. Now all the preceding comments generalize quite naturally to
  1454. functions of many variables. We can then be guided in our
  1455. treatment of the arithmetic operations by the above remarks on
  1456. functions, if we recall that the operations may be thought of as
  1457. functions of two variables, namely their operands. If we think
  1458. of, say, subtraction as such a function (i.e. subtract (x,y) =
  1459. x-y), then the same considerations of argument accuracy and
  1460. mathematical stability pertain. Thus, we allow both operands to
  1461. vary within their intervals, and simply require the result of the
  1462. operation to be within the extreme values so generated. Note
  1463. that such a technique would be necessary for any of the usual
  1464. functions which take two variables, such as some versions of
  1465. arctan.
  1466. It should be stressed that the resulting accuracy tests
  1467. represent only a very minimal requirement. The design goal was
  1468. to permit even the grainiest machine allowed by the standard to
  1469. pass the tests; all conforming implementations, then, are
  1470. inherently capable of passing. Many users will wish to impose
  1471. more stringent criteria. For example, those interested in high
  1472. accuracy, or implementors whose machines carry more than six
  1473. digits, should examine closely the computed value and true value
  1474. to see if the accuracy is what they expect.
  1475. 5.7 FOR-NEXT
  1476. The ANSI standard provides a loop capability, along with an
  1477. associated control-variable, through the use of the FOR
  1478. statement. The semantic requirements for this construction are
  1479. particularly well-defined. Specifically, the effect of the FOR
  1480. is described in terms of more primitive language features (IF,
  1481. GOTO, LET, and REM), which are themselves not very vulnerable to
  1482. misinterpretation. The tests accordingly are quite specific and
  1483. extensive in the behavior they require. The standard tests are
  1484. completely self-checking, since conformance depends only on the
  1485. value of the control-variable and number of times through the
  1486. loop. The general design plan was not only to determine passing
  1487. or failing, but also to display information allowing the user to
  1488. examine the progress of execution. This should help you diagnose
  1489. any problems. Note especially the requirement that the control
  1490. variable, upon exit from the loop, should have the first unused,
  1491. not the last used, value.
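As a sketch of the exit-value requirement (not one of the test programs):

   10 FOR I = 1 TO 10 STEP 3
   20 PRINT I
   30 NEXT I
   40 PRINT I
   50 END

Line 20 should print 1, 4, 7, and 10; line 40 should then print 13, the
first unused value of the control variable, not 10.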
  1493. The FOR statement has no associated exceptions, but it does
  1494. have a rich variety of errors, many of them context sensitive,
  1495. and therefore somewhat harder for an implementation to detect.
  1496. As always, if any error programs are accepted, the documentation
  1497. must specify what meaning the implementation assigns to them.
  1498. 5.8 Arrays
  1499. 5.8.1 Standard Capabilities
  1500. The standard provides for storing numeric values in one- or
  1501. two-dimensional arrays. The tests for standard capabilities are
  1502. all self-checking and quite straightforward in exercising some
1503. feature defined in the standard. Note the requirement that
  1504. subscript values be rounded to integers; the program testing
  1505. this must not cause an exception or the processor fails.
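A sketch of the rounding requirement (not the actual test program): the
subscript in line 20 must round to 1, so line 30 should print 7 and no
exception should occur.

   10 DIM A(5)
   20 LET A(1.4) = 7
   30 PRINT A(1)
   40 END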
  1506. 5.8.2 Exceptions
  1507. The exception tests ensure that the subscript out of range
  1508. condition is handled properly. Note that it is here that the
1509. real semantic meaning of OPTION and DIM is exercised; they have
  1510. little effect other than to cause or prevent the subscript
  1511. exception for certain subscript values. Since this is a fatal
  1512. exception, you must check (since the program cannot) that the
  1513. programs terminate at the right time, as indicated in their
  1514. messages.
  1515. 5.8.3 Errors
  1516. As with the FOR statement, there are a considerable number
  1517. of syntactic restrictions. The thrust of these restrictions is
  1518. to assure that OPTION precedes DIM, that DIM precedes references
  1519. to the arrays that it governs, and that declared subscript bounds
  1520. are compatible.
  1521. Three of the error programs call for INPUT from the user.
  1522. This is to help you diagnose the actual behavior of the
  1523. implementation if it accepts the programs. The first of these,
  1524. #73, lets you try to reference an array with a subscript of 0 or
  1525. 1 when OPTION BASE 1 and DIM A(0) have been specified, to see
  1526. when an exception occurs.
  1527. The second, #81, allows you to try a subscript of 0 or 1 for
  1528. an array whose DIM statement precedes the OPTION statement.
  1529. The third program using INPUT, #84, is a bit more complex
  1530. and has to do with double dimensioning. If there are two DIM
  1531. statements for the same array, the implementation has a choice of
  1533. several plausible interpretations. We have noted five such
  1534. possibilities and have attempted to distinguish which, if any,
  1535. seems to apply. Since the only semantic effect of DIM is to
  1536. cause or prevent an exception for a given array reference,
  1537. however, it is necessary to run the program three times to see
  1538. when exceptions occur and when they don't, assuming the processor
  1539. hasn't simply rejected the program outright. Your input-reply
  1540. simply tells the program which of the three executions it is
  1541. currently performing. For each execution, you must note whether
  1542. an exception occurred or not and then match the results against
  1543. the table in the program. Suppose, for instance, that you get an
  1544. exception the first time but not the second or third. That would
  1545. be incompatible with all five interpretations except number 4,
  1546. which is that the first DIM statement executed sets the size of
  1547. the array and it is never changed thereafter. As usual, check
  1548. the documentation to make sure it correctly describes what
  1549. happens.
  1550. 5.9 Control Statements
  1551. This group fully exploits the properties of some of the
  1552. control facilities which were tested in a simpler way in group 4.
  1553. As before, there seemed no good way to provide diagnostics for
  1554. failure of standard tests, since the behavior of a failing
  1555. processor is impossible to predict. Passing implementations will
  1556. cause the "*** TEST PASSED ***" message to appear, but certain
  1557. kinds of failures might cause the programs to abort, without
  1558. producing a failure message. Check Volume 2 for an example of
  1559. correct output.
  1560. 5.9.1 GOSUB And RETURN
  1561. Most of the tests in this group are self-explanatory, but
  1562. the one checking address stacking deserves some comment. The
  1563. standard describes the effect of issuing GOSUBs and RETURNS in
  1564. terms of a stack of return addresses, for which the GOSUB adds a
  1565. new address to the top, and the RETURN uses the most recently
  1566. added address. Thus, we get a kind of primitive recursion in the
  1567. control structure (although without any stacking of data). Note
  1568. that this description allows complete freedom in the placement of
  1569. GOSUBs and RETURNs in the source code. There is no static
  1570. association of any RETURN with any GOSUB. The test which
  1571. verifies this specification computes binomial coefficients, using
  1572. the usual recursive formula. The logic of the program is a bit
  1573. convoluted, but intentionally so, in order to exercise the
  1574. stacking mechanism vigorously.
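A much smaller sketch of the same principle (not the binomial-coefficient
test itself): there is no static pairing of RETURNs with GOSUBs, yet the
fragment below must print its message exactly once, because each RETURN
uses the most recently stacked, not-yet-used return address.

   10 GOSUB 40
   20 PRINT "BACK FROM NESTED GOSUBS"
   30 GOTO 70
   40 GOSUB 60
   50 RETURN
   60 RETURN
   70 END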
  1576. 5.9.2 ON-GOTO
  1577. The ON-GOTO tests are all readily understandable. The one
  1578. thing you might want to watch for is that the processor rounds
  1579. the expression controlling the ON-GOTO to the nearest integer, as
  1580. specified in the standard. Thus, "ON .6 GOTO", "ON 1 GOTO", and
  1581. "ON 1.4 GOTO" should all have the same effect; there should be
  1582. no out of range exception for values between .5 and 1.
  1583. 5.10 READ, DATA, And RESTORE
  1584. This group tests the facilities for establishing a stream of
  1585. data in the program and accessing it sequentially. This feature
  1586. has some subtle requirements, and it would be wise to read the
  1587. standard especially carefully so that you understand the purpose
  1588. of the tests.
  1589. 5.10.1 Standard Capabilities
  1590. All but the last of these tests are reasonably simple. The
  1591. last test dealing with the general properties of READ and DATA,
  1592. although self-checking, has somewhat complex internal logic. It
  1593. assures that the range of operands of READ and DATA can overlap
  1594. freely and that a given datum can be read as numeric at one time
  1595. and as a string at a later time. If you need to examine the
  1596. internal logic closely, be sure to use the REM statements at the
  1597. beginning which break down the structure of the READ and DATA
  1598. lists for you.
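A minimal sketch of that property (the actual test uses a considerably
more intricate arrangement of READ and DATA lists): the same datum is read
first as a number and then, after RESTORE, as a string.

   10 DATA 123, 456
   20 READ N
   30 RESTORE
   40 READ N$
   50 PRINT N
   60 PRINT N$
   70 END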
  1599. 5.10.2 Exceptions
  1600. The exceptions can be understood directly from the programs.
  1601. Note that string overflow may or may not occur, depending on the
  1602. implementation-defined maximum string length. If overflow (loss
  1603. of data) does occur, the processor must report an exception and
  1604. execution must terminate. If there is no exception report, look
  1605. carefully at the output to assure that no loss of data has
  1606. occurred.
  1607. 5.10.3 Errors
  1608. All of the error tests display results if the implementation
  1609. accepts them, allowing you to check that the documentation
  1610. matches the actual behavior of the processor. Some of the
  1611. illegal constructs are likely candidates for enhancements and
  1612. thus the diagnostic feature is important here.
  1614. 5.11 INPUT
  1615. This group, like that for PRINT, calls for a good deal of
  1616. user participation. This participation takes the form, not only
  1617. of interpreting program output but also of supplying appropriate
  1618. INPUT replies. The validity of this group depends strongly on
  1619. the entry of correct replies.
  1620. 5.11.1 Standard Capabilities
  1621. The first program assures that the processor can accept as
  1622. input any syntactically valid number. It is absolutely
  1623. essential, then, that you reply to each message with precisely
  1624. the same set of characters that it asks for. If it tells you to
  1625. enter "1.E22" you must not reply with, e.g. "1E22". This would
  1626. defeat one of the purposes of that reply, which is to see whether
  1627. the processor correctly handles a decimal point immediately
  1628. before the exponent. Once you have correctly entered the reply,
  1629. one of several things can happen. If the processor in some way
  1630. rejects the reply, for instance by producing a message that it is
  1631. not a valid number, then the processor has failed the test since
  1632. all the replies are in fact valid according to the standard. To
  1633. get by this problem, simply enter any number not numerically
  1634. equal to the one originally requested. This will let you get on
  1635. to the other items, and will signal a failure to the processor as
  1636. described below.
  1637. If the processor accepts the reply, the program then tests
  1638. that six digits of accuracy have been preserved. If so, you will
  1639. get a message that the test is OK, and may go on to the next
  1640. reply. If not, you will get a message indicating that the
  1641. correct value was not received, and the program will ask if you
  1642. want to retry that item. If you simply mistyped the original
  1643. input-reply, you should enter the code for a retry. If your
  1644. original reply was correct, but the processor misinterpreted the
  1645. numeric value, there is no point to retrying; just go ahead to
  1646. the next item. The program will count up all the failures and
  1647. report the total at the end of the program.
  1648. The next program, for array input, assures that you can
  1649. enter numbers into an array, and that assignments are done left
  1650. to right, so that a statement such as "INPUT I, A(I)" allows you
  1651. to control which element of the array gets the value. Also, it
  1652. is here (and only here) that the standard's requirement for
  1653. checking the input-reply before assignment is tested. Your first
  1654. reply to this section of the test must cause an exception, and
1655. you must be allowed to re-enter the entire reply; otherwise the
  1656. test fails. The rest of the program is self-checking.
  1657. The program for string input comes next and, as with the
1658. numeric input program, two considerations are paramount: 1) you
  1659. should enter your replies exactly as indicated in the message and
  1660. 2) all input replies are syntactically valid and therefore if the
  1662. implementation rejects any of them, it fails the test. A
  1663. potentially troublesome aspect of this program is that the
  1664. prompting message cannot always look exactly like your reply. In
  1665. particular, your replies will sometimes include blanks and
  1666. quotes. It is impossible to PRINT the quote character in Minimal
1667. BASIC, so the number-sign (#) is used instead. For ease of
  1668. counting characters, an equals (=) is used in the message to
  1669. represent blanks. Therefore, when you see the number-sign, type
  1670. the quote and when you see the equals, type the blank. If you
  1671. forget, the item will fail and you will have a chance to retry,
  1672. so make sure that a reported failure really is a processor
  1673. failure and not just your own mistyping before bypassing the
  1674. retry. As with the numeric input, if the processor rejects one
  1675. of the replies, simply enter any reply whose evaluation is not
  1676. equal to that of the prompting message to bypass the item and
  1677. force a failure. The second section of the string input program
  1678. does not use the substitute characters in the message; rather
  1679. you always type exactly what you see in the message surrounded by
  1680. quotes.
  1681. The program for mixed input follows the conventions
  1682. individually established by the numeric and string input
  1683. programs. Its purpose is simply to assure that the
  1684. implementation can handle both string and numeric data in the
  1685. same reply.
  1686. 5.11.2 Exceptions
  1687. Unlike the other groups, where each exception type is tested
  1688. with its own program, all the mandatory exceptions for INPUT are
  1689. gathered into one routine. There are two reasons for this:
  1690. first, there are so many variations worth trying that a separate
  1691. program for each would be impractical, and second, the recovery
  1692. procedures are the same for all input exception types. It is,
  1693. then, both economical and convenient to group together all the
  1694. various possibilities into one program. Underflow on INPUT is an
  1695. optional exception and has a different recovery procedure,
  1696. governed by the semantics for numeric constants rather than
  1697. INPUT. It, therefore, is tested in its own separate program.
  1698. The conformance requirements for input exceptions are
  1699. perhaps the most complex of any in the standard. It is
  1700. worthwhile to review these requirements in some detail, and then
  1701. relate them to the test. The standard says that
  1702. "unquoted-strings that are numeric-constants must be supplied as
  1703. input for numeric-variables, and either quoted-strings or
  1704. unquoted-strings must be supplied as input for string-variables."
  1705. Since the syntactic entities mentioned are well-defined in the
  1706. standard, this specification seems clear enough. Recall,
  1707. however, that processors can, in general, enlarge the class of
  1708. syntactic objects which they accept. In particular, a processor
  1709. may have an enhanced definition of quoted-string,
  1710. unquoted-string, numeric-constant, or, more generally,
  1712. input-reply, and therefore accept a reply not strictly allowed by
  1713. the standard, just as standard implementations may accept, and
  1714. render meaningful, non-standard programs. The result is that the
1715. conditions for an input exception may depend on
  1716. implementation-defined features, and thus a given input-reply may
  1717. cause an exception for one processor and yet not another. Note
  1718. that the same situation prevails for overflow - the exception
  1719. depends on the implementation-defined maximum string length and
  1720. machine infinity. Thus, "LET A = 1E37 * 100" may cause overflow
  1721. on one standard processor, but not another.
  1722. When running the program then, a given input-reply need not
  1723. generate an exception if there is a documented enhancement which
  1724. describes its interpretation. Of course, such an enhancement
  1725. must not change the meaning of any input-reply which is
  1726. syntactically standard. Note that, of the replies called for in
  1727. the program, some are syntactically standard and some are not;
  1728. they should, however, all cause exceptions on a truly minimal
  1729. BASIC processor, i.e. one with no syntactic enhancements, with
  1730. machine infinity = 1E38 and with maximum string length of 18.
  1731. Another problem is that, for some replies, it is not clear
  1732. which exception type applies. If, for instance, you respond to
  1733. "INPUT A,B,C" with: "2„3", it may be taken as a wrong type,
  1734. since a numeric-constant was not supplied for B, or as
  1735. insufficient data, since only two, not three, were supplied. In
  1736. such a case, as with all exception reports, it is sufficient if
  1737. the report is a reasonably accurate description of what went
  1738. wrong, regardless of precisely how the report corresponds to the
  1739. types defined in the standard.
  1740. As with all non-fatal exceptions, it is permitted for an
  1741. implementation to treat a given INPUT exception as fatal, if the
  1742. hardware or operating environment makes the recovery procedure
  1743. impossible to follow. The program is set up with table-driven
  1744. logic, so that each exception is triggered by a set of values in
  1745. a given DATA statement. If you need to separate out some of the
  1746. cases because they cause the program to terminate, simply delete
  1747. the DATA statements for those cases. REM statements in the
  1748. program describe the format of the data.
  1749. After that lengthy preliminary discussion, we are now ready
  1750. to consider how to operate and interpret the test. The program
  1751. will ask you for a reply, and also show you the INPUT statement
  1752. to which it is directed, to help you understand why the exception
  1753. should occur. Enter the exception-provoking reply, exactly as
  1754. requested by the message. If all goes well, the implementation
  1755. will give you an exception report and allow you to re-supply the
  1756. entire input-reply. On this second try, simply enter all zeros,
  1757. exactly as many as needed by the complete original INPUT
  1758. statement, to bypass that case - this will signal the program
  1759. that that case has passed, and you will then receive the next
  1760. message.
  1762. Now, let us look at what might go wrong. If the
  1763. implementation simply accepts the initial input-reply, the
  1764. program will display the resulting values assigned to the
  1765. variables and signal a possible failure. If the documentation
  1766. for the processor describes an enhancement which agrees with the
  1767. actual result, then that case passes; otherwise it is a failure.
  1768. Suppose the implementation reports an exception, but does
  1769. not allow you to re-supply the entire input-reply. At that
  1770. point, just do whatever the processor requires to bypass that
  1771. case. You should supply non-zero input to signal the program
  1772. that the case in question has failed.
  1773. When the program detects an apparent failure (non-zeros in
  1774. the variables) it allows you to retry the whole case. As before,
  1775. if you mistyped you should reply that you wish to retry; if the
  1776. processor simply mishandled the exception, reject the retry and
  1777. move on to the next case.
  1778. Figure 3 outlines the user's operating and interpretation
  1779. procedure for the INPUT exception test.
  1780. 5.11.3 Errors
  1781. There is only one error program and it tests the effect of a
  1782. null entry in the input-list. The usual rules for error tests
  1783. apply (see section 4.4.2.3).
  1785. Instructions for the INPUT exceptions test
  1786. Inspect message from program
  1787. Supply exact copy of message as input-reply
  1788. If processor reports exception
  1789. then
  1790. if processor allows you to re-supply entire reply
  1791. then
  1792. enter all zeros (exactly enough to satisfy original
  1793. INPUT request)
  1794. if processor responds that test passed
  1795. then
  1796. test passed
  1797. else (no pass message after entering zeros)
  1798. zeros not assigned to variables
  1799. test failed (recovery procedure not followed)
  1800. endif
  1801. else (not allowed to re-supply entire reply)
  1802. supply any non-zero reply to bypass this case
  1803. test failed (recovery procedure not followed)
  1804. endif
  1805. else (no exception report)
  1806. if documentation for processor correctly describes syntactic
  1807. enhancement to accept the reply
  1808. then
  1809. test passed
  1810. else (no exception and incorrect/missing documentation)
  1811. test failed
  1812. endif
  1813. endif
  1814. Figure 3
  1816. 5.12 Implementation-supplied Functions
  1817. All conforming implementations must make available to the
  1818. programmer the set of functions defined in section 8 of the ANSI
  1819. standard. The purpose of this group is to assure that these
  1820. functions have actually been implemented and also to measure at
  1821. least roughly the quality of implementation.
  1822. 5.12.1 Precise Functions: ABS,INT,SGN
  1823. These three functions are distinguished among the eleven
  1824. supplied functions in that any reasonable implementation should
  1825. return a precise value for them. Therefore they can be tested in
  1826. a more stringent manner than the other eight which are inherently
  1827. approximate (i.e. a discrete machine cannot possibly supply an
  1828. exact answer for most arguments).
  1829. The structure of the tests is simple: the function under
  1830. test is invoked with a variety of argument values and the
  1831. returned value is compared to the correct result. If all results
  1832. are equal, the test passes, otherwise it fails. The values are
  1833. displayed for your inspection and the tests are self-checking.
  1834. The test for the INT function has a second section which does an
  1835. informative test on the values returned for large arguments
  1836. requiring more than six digits of accuracy.
  1837. 5.12.2 Approximated Functions: SQR,ATN,COS,EXP,LOG,SIN,TAN
  1838. These functions do not typically return rational values for
  1839. rational arguments and thus may only be approximated by digital
  1840. computers. Furthermore, the standard explicitly disavows any
  1841. criterion of accuracy, making it difficult to say when an
  1842. implementation has definitely failed a test. Because of these
  1843. constraints, the non-exception tests in this group are
  1844. informative only. We can, however, quite easily apply the ideas
  1845. developed earlier in section 5.6.4. As explained there, we can
  1846. devise an accuracy criterion for the implementation of a
  1847. function, based on a hypothetical six decimal digit machine. If
  1848. a function returns a value less accurate even than that of which
  1849. this worst-case machine is capable, the informative test fails.
  1850. To repeat the earlier guidance for the numeric operations:
  1851. this approach imposes only a very minimal requirement. You may
  1852. well want to set a stricter standard for the implementation under
  1853. test. For this reason, the programs in this group also compute
  1854. and report an error measure, which gives an estimate of the
  1855. degree of accuracy achieved, again relative to a six-digit
  1856. machine. The error measure thus goes beyond a simple pass/fail
  1857. report and quantifies how well or poorly the function value was
  1858. computed. Of course, the error measure itself is subject to
  1859. inaccuracy in its own internal computation, and no one
  1861. measurement should be taken as precisely correct. Nonetheless,
  1862. when the error measures of all the cases are considered in the
  1863. aggregate, it should give a good overall picture of the quality
  1864. of function evaluation. Since it is based on the same allowed
  1865. interval for values as the pass/fail criterion, it too measures
  1866. the quality of function evaluation independent of the function
  1867. and argument under test. It does depend on the internal accuracy
  1868. with which the implementation can represent numeric quantities:
  1869. the greater the accuracy, the smaller the error measure should
  1870. become. As a rough guide, the error measures should all be <
1871. 10^(6-d), where d is the number of significant decimal digits
  1872. supported by the implementation (this is determined in the
  1873. standard tests for numeric operations, group 6.1). For instance,
  1874. an eight decimal digit processor should have all error measures <
  1875. .01.
  1876. Another point to be stressed: even though the results of
  1877. these tests are informative, the tests themselves are
  1878. syntactically standard, and thus must be accepted and processed
1879. by the implementation. If, for instance, the processor does not
  1880. recognize the ATN function and rejects the program, it definitely
  1881. fails to conform to the standard. This is in contrast to the
  1882. case of a processor which accepts the program, but returns
  1883. somewhat inaccurate values. The latter processor is arguably
  1884. standard-conforming, even if of low quality.
  1885. This group also contains exception tests for those
  1886. conditions so specified in the ANSI standard. Most of these can
  1887. be understood in light of the general guidance given for
  1888. exceptions. The program for overflow of the TAN function
  1889. deserves some comment. Since it is questionable whether overflow
  1890. can be forced simply by encoding pi/2 as a numeric constant for
  1891. the source code argument, the program attempts to generate the
  1892. exception by a convergence algorithm. It may be, however, that
  1893. no argument exists which will cause overflow, so you must verify
  1894. merely that if overflow occurs, then it is reported as an
  1895. exception. For instance, if several of the function calls return
  1896. machine infinity, it is clear that overflow has occurred and if
  1897. there were no exception report in such a case, the test fails.
  1898. Also, as a measure of quality, the returned values with a given
  1899. sign should increase in magnitude until overflow occurs, i.e.
  1900. all the positive values should form an ascending sequence, and
  1901. the negative values a descending sequence.
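The sketch below suggests what such a convergence toward pi/2
might look like; it illustrates the idea only and is not the
algorithm used by the actual test program.

      100 REM SKETCH: SEARCHING FOR AN ARGUMENT NEAR PI/2 THAT MAKES
      110 REM TAN OVERFLOW (ILLUSTRATION ONLY, NOT THE NBS ALGORITHM)
      120 LET P = 4 * ATN(1)
      130 REM P/2 APPROXIMATES PI/2; HALVE THE DISTANCE TO IT EACH TIME
      140 LET X = 1
      150 FOR I = 1 TO 30
      160 LET X = X + (P/2 - X) / 2
      170 PRINT "TAN("; X; ") ="; TAN(X)
      180 NEXT I
      190 REM IF OVERFLOW EVER OCCURS IT MUST BE REPORTED AS AN EXCEPTION;
      200 REM UNTIL THEN THE PRINTED VALUES SHOULD INCREASE IN MAGNITUDE
      210 END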
  1902. 5.12.3 RND And RANDOMIZE
  1903. Unlike the other functions, there is no single correct value
  1904. to be returned by any individual reference to RND, but only the
  1905. properties of an aggregation of returned values are specified.
  1906. The standard says that these values are "uniformly distributed in
  1907. the range 0 <= RND < 1". Also, section 17 specifies that in the
  1908. absence of the RANDOMIZE statement, RND will generate the same
  1909. pseudorandom sequence for each execution of a program;
  1910. Page 44
  1911. conversely, each execution of RANDOMIZE "generates a new
  1912. unpredictable starting point" for the sequence produced by RND.
  1913. The RND tests follow closely the strategy put forth in chapter
  1914. 3.3.1 of Knuth's The Art of Computer Programming [4], which
  1915. explains fully the rationale for the programs in this group.
  1916. 5.12.3.1 Standard Capabilities
  1917. The first two programs test that the same sequence or a
  1918. novel sequence appear as appropriate, depending on whether
  1919. RANDOMIZE has executed. Note that you must execute both of these
  1920. programs three times apiece, since the RND sequence is
  1921. initialized by the implementation only when execution begins.
  1922. The next three programs all test properties of the sequence which
  1923. follow directly from the specification that it is uniformly
  1924. distributed in the range 0 <= RND < 1. If the results make it
  1925. quite improbable that the distribution is uniform, or if any
  1926. value returned is outside the legal range, then the test fails.
  1927. Of course, any implementation could pass simply by adjusting the
  1928. RND algorithm or starting point until a passing sequence is
  1929. generated. In order to measure the quality of implementation,
  1930. you can run the programs with a RANDOMIZE statement in the
  1931. beginning and then observe how often the test passes or fails.
  1932. Note that, if you use RANDOMIZE, these programs should fail a
  1933. certain proportion of the time since they are probabilistic
  1934. tests.
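As a simple illustration of this kind of probabilistic check
(again, not one of the actual test programs), the sketch below
accumulates a sample mean and counts any values falling outside
the legal range:

      100 REM SKETCH: MEAN AND RANGE CHECK FOR RND (ILLUSTRATION ONLY)
      110 LET S = 0
      120 LET B = 0
      130 FOR I = 1 TO 1000
      140 LET R = RND
      150 IF R >= 0 THEN 170
      160 LET B = B + 1
      170 IF R < 1 THEN 190
      180 LET B = B + 1
      190 LET S = S + R
      200 NEXT I
      210 PRINT "VALUES OUT OF RANGE ="; B
      220 PRINT "SAMPLE MEAN ="; S / 1000
      230 REM FOR A UNIFORM GENERATOR THE MEAN SHOULD BE CLOSE TO 0.5
      240 END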
  1935. 5.12.3.2 Informative Tests
  1936. There are several desirable properties of a sequence of
  1937. pseudorandom numbers which are not strictly implied by uniform
  1938. distribution. If, for instance, the numbers in the sequence
  1939. alternated between being <= .5 and > .5, they might still be
  1940. uniform, but would be non-random in an important way. These
  1941. tests attempt to measure how well the implementation has
  1942. approached the ideal of a perfectly random sequence by looking
  1943. for patterns indicative of nonrandomness in the sequence actually
  1944. produced. Like the tests for standard capabilities, these
  1945. programs are probabilistic and any one of them may fail without
  1946. necessarily implying that the RND sequence is not random. If a
  1947. high quality RND function is important for your purposes, we
  1948. suggest you run each of these programs several times with the
  1949. RANDOMIZE statement. If a given test seems to fail far more
  1950. often than likely, it may well indicate a weakness in the RND
  1951. algorithm.
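The fragment below, offered only as an illustration and not as
one of the actual test programs, counts how often successive
values fall on opposite sides of .5; a count far from half the
number of pairs would hint at the sort of pattern described
above:

      100 REM SKETCH: COUNT HOW OFTEN SUCCESSIVE RND VALUES FALL ON
      110 REM OPPOSITE SIDES OF .5 (ILLUSTRATION ONLY, NOT AN NBS TEST)
      120 LET N = 1000
      130 LET C = 0
      140 LET P = RND
      150 FOR I = 2 TO N
      160 LET R = RND
      170 IF (P - .5) * (R - .5) >= 0 THEN 190
      180 LET C = C + 1
      190 LET P = R
      200 NEXT I
      210 PRINT "ALTERNATIONS ="; C; " OUT OF"; N - 1
      220 REM FOR A RANDOM SEQUENCE ABOUT HALF THE PAIRS SHOULD ALTERNATE;
      230 REM A COUNT NEAR N-1 WOULD SUGGEST THE PATTERN DESCRIBED ABOVE
      240 END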
  1952. Page 45
  1953. 5.12.4 Errors
  1954. The tests in this group all use an argument-list which is
  1955. incorrect in some way, either for the particular function, or
  1956. because of the general rules of syntax. As always, if the
  1957. processor does accept any of them, the documentation must be
  1958. consistent with the actual results. Note that the ANSI standard
  1959. contains a misprint, indicating that the TAN function takes no
  1960. arguments. The tests are written to treat TAN as a function of a
  1961. single variable.
  1962. 5.13 User-defined Functions
  1963. The standard provides a facility so that programmers can
  1964. define functions of a single variable in the form of a numeric
  1965. expression. This group of tests exercises both the invoking
  1966. mechanism (function references) and the defining mechanism (DEF
  1967. statement).
  1968. 5.13.1 Standard Capabilities
  1969. These programs test a variety of properties guaranteed by
  1970. the standard: the DEF statement must allow any numeric
  1971. expression as the function definition; the parameter, if any,
  1972. must not be confused with a global variable of the same name;
  1973. global variables, other than one with the same name as the
  1974. parameter, are available to the function definition; a DEF
  1975. statement in the path of execution has no effect; invocation of
  1976. a function as such never changes the value of any variable; the
  1977. set of valid names for user-defined functions is "FN" followed by
  1978. any alphabetic character. The tests are self-checking. As with
  1979. the numeric operations, a very loose criterion of accuracy is
  1980. used to check the implementation. Its purpose is not to check
  1981. accuracy as such, but only to assure that the semantic behavior
  1982. accords with the standard.
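The fragment below illustrates two of these properties, the
distinctness of the parameter from a global variable of the same
name and the availability of other global variables. It is a
sketch for explanatory purposes, not one of the test programs.

      100 REM SKETCH: PARAMETER OF A USER-DEFINED FUNCTION IS DISTINCT
      110 REM FROM A GLOBAL VARIABLE OF THE SAME NAME (ILLUSTRATION ONLY)
      120 DEF FNA(X) = X + B
      130 LET X = 100
      140 LET B = 3
      150 LET Y = FNA(2)
      160 PRINT "FNA(2) ="; Y
      170 REM CORRECT VALUE IS 5: THE PARAMETER X TAKES THE VALUE 2,
      180 REM THE GLOBAL B IS AVAILABLE, AND THE GLOBAL X IS UNCHANGED
      190 PRINT "GLOBAL X ="; X
      200 END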
  1983. 5.13.2 Errors
  1984. Many of these tests are similar to the error tests for
  1985. implementation-supplied functions, in that they try out various
  1986. malformed argument lists. There are also some tests involving
  1987. the DEF statement, in particular for the requirements that a
  1988. program contain exactly one DEF statement for each user function
  1989. referred to in the program and that the definition precede any
  1990. references.
  1991. Page 46
  1992. 5.14 Numeric Expressions
  1993. Numeric expressions have a somewhat special place in the
  1994. Minimal BASIC standard. They are the most complex entity,
  1995. syntactically, for two reasons. First, the expression itself may
  1996. be built up in a variety of ways. Numeric constants, variables,
  1997. and function references are combined using any of five
  1998. operations. The function references themselves may be to
  1999. user-defined expressions. And of course expressions can be
  2000. nested, either implicitly, or explicitly with parentheses.
  2001. Second, not only do the expressions have a complex internal
  2002. syntax, but also they may appear in a number of quite different
  2003. contexts. Not just the LET statement, but also the IF, PRINT,
  2004. ON...GOTO, and FOR statements, can contain expressions. Also
  2005. they may be used as array subscripts or as arguments in a
  2006. function reference. Note that when they are used in the
  2007. ON...GOTO, as subscripts, or as arguments to TAB, expressions
  2008. must be rounded to the nearest integer.
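The sketch below, an illustration rather than one of the test
programs, shows the rounding rule in two of these contexts, a
subscript and an ON-GOTO control expression.

      100 REM SKETCH: EXPRESSIONS USED AS SUBSCRIPTS OR IN ON-GOTO ARE
      110 REM ROUNDED TO THE NEAREST INTEGER (ILLUSTRATION ONLY)
      120 DIM A(5)
      130 LET A(3) = 99
      140 PRINT "A(2.6) ="; A(2.6)
      150 REM 2.6 ROUNDS TO 3, SO THE VALUE 99 SHOULD BE PRINTED
      160 ON 1.7 GOTO 190, 170
      170 PRINT "ON-GOTO ROUNDED 1.7 TO 2"
      180 GOTO 200
      190 PRINT "ON-GOTO DID NOT ROUND 1.7 TO 2"
      200 END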
  2009. The overall strategy of the test system is first to assure
  2010. that the elements of numeric expressions are handled correctly,
  2011. then to try out increasingly complex expressions in the
  2012. comparatively simple context of the LET statement, and finally to
  2013. verify that these complex expressions work properly in the other
  2014. contexts mentioned. Preceding groups have already accomplished
  2015. the first task of checking out individual expression elements,
  2016. such as constants, variables (both simple and array), and
  2017. function references. This group completes the latter two steps.
  2018. 5.14.1 Standard Capabilities In Context Of LET-statement
  2019. This test tries out various lengthy expressions, using the
  2020. full generality allowed by the standard, and assigns the
  2021. resulting value to a variable. As usual, if this value is even
  2022. approximately correct, the test passes, since we are interested
  2023. in semantics rather than accuracy. The program displays the
  2024. correct value and actual computed value. This test also verifies
  2025. that subscript expressions evaluate to the nearest integer.
  2026. 5.14.2 Expressions In Other Contexts: PRINT, IF, ON-GOTO, FOR
  2027. Please note that the PRINT test, like other PRINT tests, is
  2028. inherently incapable of checking itself, and therefore you must
  2029. inspect and interpret the results. The PRINT program first tests
  2030. the use of expressions as print-items. Check that the actual and
  2031. correct values are reasonably close. The second section of the
  2032. program tests that the TAB call is handled correctly. Simply
  2033. verify that the characters appear in the appropriate columns.
  2034. Page 47
  2035. The second program is self-checking and tests IF, ON-GOTO
  2036. and FOR, one in each section. As with other tests of control
  2037. statements, the diagnostics are rather sparse for failures.
  2038. Check Volume 2 for an example of correct output.
  2039. 5.14.3 Exceptions In Subscripts And Arguments
  2040. The exceptions specified in sections 7 and 8 apply to numeric
  2041. expressions in whatever context they occur. These tests simply
  2042. assure that the correct values are supplied, e.g., machine
  2043. infinity for overflow, zero for underflow, and that the execution
  2044. continues normally as if that value had been put in that context
  2045. as, say, a numeric constant. Sometimes this action will produce
  2046. normal results and sometimes will trigger another exception,
  2047. e.g., machine infinity supplied as a subscript. Simply verify
  2048. that the exception reports are produced as specified in the
  2049. individual tests.
  2050. 5.14.4 Exceptions In Other Contexts: PRINT, IF, ON-GOTO, FOR
  2051. As in the immediately preceding section, these tests make
  2052. sure that the recovery procedures have the natural effect given
  2053. the context in which they occur. As usual for exception tests,
  2054. it is up to you to verify that reasonable exception reports
  2055. appear. The PRINT tests also require user interpretation to some
  2056. degree.
  2057. 5.15 Miscellaneous Checks
  2058. This group consists mostly of error tests in which the error
  2059. is tied not to some specific functional area but rather to the
  2060. general format rules for BASIC programs. If you are not already
  2061. thoroughly familiar with the general criteria for error tests, it
  2062. would be wise to review them (sections 3.2.2 and 4.4.2.3 of this
  2063. document) before going through this group. A few tests require
  2064. special comment and this is supplied below in the appropriate
  2065. subsection.
  2066. 5.15.1 Missing Keyword
  2067. Many implementations of BASIC allow programs to omit the
  2068. keyword LET in assignment statements. This program checks that
  2069. possibility and reports the resulting behavior if accepted.
  2070. Page 48
  2071. 5.15.2 Spaces
  2072. Sections 3 and 4 of the ANSI standard specify several
  2073. context sensitive rules for the occurrence of spaces in a BASIC
  2074. program. The standard test assures that wherever one space may
  2075. occur, several spaces may occur with no effect, except within a
  2076. quoted- or unquoted-string. There are certain places where
  2077. spaces either must, or may, or may not appear, and the error
  2078. programs test how the implementation treats various violations of
  2079. the rules.
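The fragment below illustrates the distinction informally (it is
not one of the test programs): extra spaces between elements are
harmless, while the error programs submit lines with spaces in
forbidden places, such as within a keyword.

      100 REM SKETCH: EXTRA SPACES HAVE NO EFFECT OUTSIDE OF STRINGS
      110 REM (ILLUSTRATION ONLY, NOT ONE OF THE NBS TEST PROGRAMS)
      120 LET    X   =   1   +   2
      130 PRINT "X ="; X
      140 REM THE ERROR PROGRAMS, BY CONTRAST, TRY LINES WITH A SPACE
      150 REM INSIDE A KEYWORD OR A LINE NUMBER, E.G.  LE T X = 1,
      160 REM AND REPORT HOW THE PROCESSOR REACTS
      170 END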
  2080. 5.15.3 Quotes
  2081. These programs test the effect of using either a single or
  2082. double quote in a quoted string. Some processors may interpret
  2083. the double quote as a single occurrence of the quote character
  2084. within the string. The programs test the effect of aberrant
  2085. quotes in the context of the PRINT and the LET statements.
  2086. 5.15.4 Line Numbers
  2087. The first of these programs is a standard, not an error,
  2088. test. It verifies that leading zeros in line numbers have no
  2089. effect. The other programs all deal with some violation of the
  2090. syntax rules for line numbers. When submitting these programs to
  2091. your implementation, you should not explicitly call for any
  2092. sorting or renumbering of lines. If the implementation sorts the
  2093. lines by default, even when the program is submitted to it in the
  2094. simplest way, the documentation must make this clear. Such
  2095. sorting merely constitutes a particular type of syntactic
  2096. enhancement, i.e., to treat a program with lines out of order as
  2097. if they were in order. Similarly, an implementation may discard
  2098. duplicate lines, or supply line numbers for lines that are
  2099. missing them, as long as these actions occur without special user
  2100. intervention and are documented. Of course, processors may also
  2101. reject such programs, with an error message to the user.
  2102. 5.15.5 Line Longer Than 72 Characters
  2103. This program tests the implementation's reaction to a line
  2104. whose length is greater than the standard limit of 72. Many
  2105. implementations accept longer lines; if so the documentation
  2106. must specify the limit.
  2107. Page 49
  2108. 5.15.6 Margin Overflow For Output Line
  2109. This is not an error test, but a standard one. Further, it
  2110. involves PRINT capabilities and therefore calls for careful user
  2111. interpretation. Its purpose is to assure correct handling of the
  2112. margin and print zones, relative to the implementation-defined
  2113. length for each of those two entities. After you have entered
  2114. the appropriate values, the program will generate pairs of
  2115. output, with either one or two printed lines for each member of
  2116. the pair. The first member is produced using primitive
  2117. capabilities of PRINT and is intended to show what the output
  2118. should look like. The second member of the pair is produced
  2119. using the facilities under test and shows what the output
  2120. actually looks like. If the two members differ at all, the test
  2121. fails. It could happen, however, that the first member of the
  2122. pair does not produce the correct output either. You should,
  2123. therefore, closely examine the sample output for this test in
  2124. Volume 2 to understand what the expected output is. Of course
  2125. the sample is exactly correct only for implementations with the
  2126. same margin and zone width, but allowing for the possibly
  2127. different widths of your processor, the sample should give you
  2128. the idea of what your processor must do.
  2129. 5.15.7 Lowercase Characters
  2130. These two tests tell you whether your processor can handle
  2131. lowercase characters in the program, and, if so, whether they are
  2132. converted to uppercase or left as lowercase.
  2133. 5.15.8 Ordering Strings
  2134. This program tests whether your implementation accepts
  2135. comparison operators other than the standard = or <> for strings.
  2136. If the processor does accept them, the program assumes that the
  2137. interpretation is the intuitively appealing one and prints
  2138. informative output concerning the implicit character collating
  2139. sequence and also some comparison results for multi-character
  2140. strings.
  2141. 5.15.9 Mismatch Of Types In Assignment
  2142. These programs check whether the processor accepts
  2143. assignment of a string to a numeric variable and vice-versa, and
  2144. if so what the resulting value of the receiving variable is. As
  2145. usual, make sure your documentation covers these cases if the
  2146. implementation accepts these programs.
  2147. Page 50
  2148. 6 TABLES OF SUMMARY INFORMATION ABOUT THE TEST PROGRAMS
  2149. This section contains three tables which should help you
  2150. find your way around the programs and the ANSI standard. The
  2151. first table presents the functional grouping of the tests and
  2152. shows which programs are in each group and the sections of the
  2153. ANSI standard whose specifications are being tested thereby. The
  2154. second table lists all the programs individually by number and
  2155. title, and also the particular sections and subsections of the
  2156. standard to which they apply. The third table lists the sections
  2157. and subsections of the standard in order, followed by a list of
  2158. program numbers for those sections. This third table is
  2159. especially important if you want to test the implementation of
  2160. only certain parts of the standard. Be aware, however, that
  2161. since the sections of the standard are not tested in order, the
  2162. tests for a given section may rely on the implementation of later
  2163. sections in the standard which have been tested earlier in the
  2164. test sequence.
  2165. Page 51
  2166. 6.1 Group Structure Of The Minimal BASIC Test Programs
  2167. Program ANSI
  2168. Group Number Section
  2169. 1 Simple PRINTing of string constants 1 (3,5,12)
  2170. 2 END and STOP 2-5 (4,10)
  2171. 2.1 END 2-4 (4)
  2172. 2.2 STOP 5 (10)
  2173. 3 PRINTing and simple assignment (LET) 6-14 (5,6,9,12)
  2174. 3.1 string variables and TAB 6-8 (6,9,12)
  2175. 3.2 numeric constants and variables 9-14 (5,6,9,12)
  2176. 4 Control Statements and REM 15-21 (10,18)
  2177. 4.1 REM and GOTO 15-16 (10,18)
  2178. 4.2 GOSUB and RETURN 17 (10)
  2179. 4.3 IF-THEN 18-21 (10)
  2180. 5 Variables 22-23 (6)
  2181. 6 Numeric Constants, Variables,
  2182. and Operations 24-43 (5,6,7)
  2183. 6.1 Standard Capabilities 24-27 (5,6,7)
  2184. 6.2 Exceptions 28-35 (5,7)
  2185. 6.3 Errors 36-38 (7)
  2186. 6.4 Accuracy tests - Informative 39-43 (7)
  2187. 7 FOR-NEXT 44-55 (10,11)
  2188. 7.1 Standard Capabilities 44-49 (10,11)
  2189. 7.2 Errors 50-55 (11)
  2190. 8 Arrays 56-84 (6,7,9,15)
  2191. 8.1 Standard Capabilities 56-62 (6,7,9,15)
  2192. 8.2 Exceptions 63-72 (6,15)
  2193. 8.3 Errors 73-84 (6,15)
  2194. Page 52
  2195. Group Structure of the Minimal BASIC Test Programs (cont.)
  2196. Group Program ANSI
  2197. Number Section
  2198. 9 Control Statements 85-91 (10)
  2199. 9.1 GOSUB and RETURN 85-87 (10)
  2200. 9.2 ON-GOTO 88-91 (10)
  2201. 10 READ, DATA, and RESTORE 92-106 (3,5,14)
  2202. 10.1 Standard Capabilities 92-95 (3,5,14)
  2203. 10.2 Exceptions 96-101 (14)
  2204. 10.3 Errors 102-106 (3,14)
  2205. 11 INPUT 107-113 (3,5,13)
  2206. 11.1 Standard Capabilities 107-110 (3,5,13)
  2207. 11.2 Exceptions 111-112 (3,5,13)
  2208. 11.3 Errors 113 (13)
  2209. 12 Implementation-supplied Functions 114-150 (7,8,17)
  2210. 12.1 Precise functions:
  2211. ABS, INT, SGN 114-116 (8)
  2212. 12.2 Approximated functions:
  2213. SQR, ATN, COS, EXP, LOG,
  2214. SIN, TAN 117-129 (7,8)
  2215. 12.3 RND and RANDOMIZE 130-142 (8,17)
  2216. 12.3.1 Standard Capabilities 130-134 (8,17)
  2217. 12.3.2 Informative tests 135-142 (8)
  2218. 12.4 Errors 143-150 (7,8)
  2219. 13 User-defined Functions 151-163 (7,16)
  2220. 13.1 Standard Capabilities 151-152 (7,16)
  2221. 13.2 Errors 153-163 (7,16)
  2222. Page 53
  2223. Group Structure of the Minimal BASIC Test Programs (cont.)
  2224. Group Program ANSI
  2225. Number Section
  2226. 14 Numeric Expressions 164-184 (6,7,8,10,
  2227. 11,12,16)
  2228. 14.1 Standard Capabilities in context of
  2229. LET—statement 164 (6,7,8,16)
  2230. 14.2 Expressions in other contexts:
  2231. PRINT, IF, ON—GOTO, FOR 165-166 (7,10,11,12)
  2232. 14.3 Exceptions in subscripts and
  2233. arguments 167-171 (6,7,8,16)
  2234. 14.4 Exceptions in other contexts:
  2235. PRINT, IF, ON—GOTO, FOR 172-184 (7,10,11,12)
  2236. 15 Miscellaneous Checks 185-208 (3,4,9,10,12)
  2237. 15.1 Missing keyword 185 (9)
  2238. 15.2 Spaces 186-191 (3,4)
  2239. 15.3 Quotes 192-195 (3,9,12)
  2240. 15.4 Line numbers 196-201 (4)
  2241. 15.5 Line longer than 72 characters 202 (4)
  2242. 15.6 Effect of zones and margin on PRINT 203 (12)
  2243. 15.7 Lowercase characters 204-205 (3,9,12)
  2244. 15.8 Ordering relations between strings 206 (3,10)
  2245. 15.9 Mismatch of types in assignment 207-208 (9)
  2246. Page 54
  2247. 6.2 Test Program Sequence
  2248. PROGRAM NUMBER 1
  2249. NULL PRINT AND PRINTING QUOTED STRINGS.
  2250. REFS: 3.2 3.4 5.2 5.4 12.2 12.4
  2251. PROGRAM NUMBER 2
  2252. THE END-STATEMENT.
  2253. REFS: 4.2 4.4
  2254. PROGRAM NUMBER 3
  2255. ERROR - MISPLACED END-STATEMENT.
  2256. REFS: 4.2 4.4
  2257. PROGRAM NUMBER 4
  2258. ERROR - MISSING END-STATEMENT.
  2259. REFS: 4.2 4.4
  2260. PROGRAM NUMBER 5
  2261. THE STOP-STATEMENT.
  2262. REFS: 10.2 10.4
  2263. PROGRAM NUMBER 6
  2264. PRINT-SEPARATORS, TABS, AND STRING VARIABLES.
  2265. REFS: 6.2 6.4 9.2 9.4 12.2 12.4
  2266. PROGRAM NUMBER 7
  2267. EXCEPTION - STRING OVERFLOW USING THE LET-STATEMENT.
  2268. REFS: 9.5 12.4
  2269. PROGRAM NUMBER 8
  2270. EXCEPTION - TAB ARGUMENT LESS THAN ONE.
  2271. REFS: 12.5
  2272. PROGRAM NUMBER 9
  2273. PRINTING NR1 AND NR2 NUMERIC CONSTANTS.
  2274. REFS: 5.2 5.4 12.4
  2275. PROGRAM NUMBER 10
  2276. PRINTING NR3 NUMERIC CONSTANTS.
  2277. REFS: 5.2 5.4 12.4
  2278. PROGRAM NUMBER 11
  2279. PRINTING NUMERIC VARIABLES ASSIGNED NR1 AND NR2 CONSTANTS.
  2280. REFS: 5.2 5.4 6.2 6.4 9.2 9.4 12.4
  2281. PROGRAM NUMBER 12
  2282. PRINTING NUMERIC VARIABLES ASSIGNED NR3 CONSTANTS.
  2283. REFS: 5.2 5.4 6.2 6.4 9.2 9.4 12.4
  2284. PROGRAM NUMBER 13
  2285. FORMAT AND ROUNDING OF PRINTED NUMERIC CONSTANTS.
  2286. REFS: 12.4 5.2 5.4
  2287. Page 55
  2288. PROGRAM NUMBER 14
  2289. PRINTING AND ASSIGNING NUMERIC VALUES NEAR TO THE MAXIMUM AND
  2290. MINIMUM MAGNITUDE.
  2291. REFS: 5.4 9.4 12.4
  2292. PROGRAM NUMBER 15
  2293. THE REM AND GOTO STATEMENTS.
  2294. REFS: 18.2 18.4 10.2 10.4
  2295. PROGRAM NUMBER 16
  2296. ERROR - TRANSFER TO A NON-EXISTING LINE NUMBER USING THE
  2297. GOTO-STATEMENT.
  2298. REFS: 10.4
  2299. PROGRAM NUMBER 17
  2300. ELEMENTARY USE OF GOSUB AND RETURN.
  2301. REFS: 10.2 10.4
  2302. PROGRAM NUMBER 18
  2303. THE IF-THEN STATEMENT WITH STRING OPERANDS.
  2304. REFS: 10.2 10.4
  2305. PROGRAM NUMBER 19
  2306. THE IF-THEN STATEMENT WITH NUMERIC OPERANDS
  2307. REFS: 10.2 10.4
  2308. PROGRAM NUMBER 20
  2309. ERROR - IF-THEN STATEMENT WITH A STRING AND NUMERIC OPERAND.
  2310. REFS: 10.2
  2311. PROGRAM NUMBER 21
  2312. ERROR - TRANSFER TO NON-EXISTING LINE NUMBER USING THE
  2313. IF-THEN-STATEMENT.
  2314. REFS: 10.4
  2315. PROGRAM NUMBER 22
  2316. NUMERIC AND STRING VARIABLE NAMES WITH THE SAME INITIAL
  2317. LETTER.
  2318. REFS: 6.2 6.4
  2319. PROGRAM NUMBER 23
  2320. INITIALIZATION OF STRING AND NUMERIC VARIABLES.
  2321. REFS: 6.6
  2322. PROGRAM NUMBER 24
  2323. PLUS AND MINUS
  2324. REFS: 7.2 7.4
  2325. PROGRAM NUMBER 25
  2326. MULTIPLY, DIVIDE, AND INVOLUTE
  2327. REFS: 7.2 7.4
  2328. PROGRAM NUMBER 26
  2329. PRECEDENCE RULES FOR NUMERIC EXPRESSIONS.
  2330. REFS: 7.2 7.4
  2331. Page 56
  2332. PROGRAM NUMBER 27
  2333. ACCURACY OF CONSTANTS AND VARIABLES.
  2334. REFS: 5.2 5.4 6.2 6.4 10.4
  2335. PROGRAM NUMBER 28
  2336. EXCEPTION - DIVISION BY ZERO.
  2337. REFS: 7.5
  2338. PROGRAM NUMBER 29
  2339. EXCEPTION - OVERFLOW OF NUMERIC EXPRESSIONS.
  2340. REFS: 7.5
  2341. PROGRAM NUMBER 30
  2342. EXCEPTION - OVERFLOW OF NUMERIC CONSTANTS.
  2343. REFS: 5.4 5.5
  2344. PROGRAM NUMBER 31
  2345. EXCEPTION - ZERO RAISED TO A NEGATIVE POWER.
  2346. REFS: 7.5
  2347. PROGRAM NUMBER 32
  2348. EXCEPTION - NEGATIVE QUANTITY RAISED TO A NON-INTEGRAL POWER.
  2349. REFS: 7.5
  2350. PROGRAM NUMBER 33
  2351. EXCEPTION - UNDERFLOW OF NUMERIC EXPRESSIONS.
  2352. REFS: 7.4
  2353. PROGRAM NUMBER 34
  2354. EXCEPTION - UNDERFLOW OF NUMERIC CONSTANTS.
  2355. REFS: 5.4 5.6
  2356. PROGRAM NUMBER 35
  2357. EXCEPTION - OVERFLOW AND UNDERFLOW WITHIN SUB-EXPRESSIONS
  2358. REFS: 7.4 7.5
  2359. PROGRAM NUMBER 36
  2360. ERROR - UNMATCHED PARENTHESES IN NUMERIC EXPRESSION.
  2361. REFS: 7.2
  2362. PROGRAM NUMBER 37
  2363. ERROR - USE OF '**' AS OPERATOR.
  2364. REFS: 7.2
  2365. PROGRAM NUMBER 38
  2366. ERROR - USE OF ADJACENT OPERATORS.
  2367. REFS: 7.2
  2368. PROGRAM NUMBER 39
  2369. ACCURACY OF ADDITION
  2370. REFS: 7.2 7.4 7.6
  2371. PROGRAM NUMBER 40
  2372. ACCURACY OF SUBTRACTION
  2373. REFS: 7.2 7.4 7.6
  2374. Page 57
  2375. PROGRAM NUMBER 41
  2376. ACCURACY OF MULTIPLICATION
  2377. REFS: 7.2 7.4 7.6
  2378. PROGRAM NUMBER 42
  2379. ACCURACY OF DIVISION
  2380. REFS: 7.2 7.4 7.6
  2381. PROGRAM NUMBER 43
  2382. ACCURACY OF INVOLUTION
  2383. REFS: 7.2 7.4 7.6
  2384. PROGRAM NUMBER 44
  2385. ELEMENTARY USE OF THE FOR-STATEMENT.
  2386. REFS: 11.2 11.4
  2387. PROGRAM NUMBER 45
  2388. ALTERING THE CONTROL-VARIABLE WITHIN A FOR-BLOCK.
  2389. REFS: 11.2 11.4
  2390. PROGRAM NUMBER 46
  2391. INTERACTION OF CONTROL STATEMENTS WITH THE FOR-STATEMENT.
  2392. REFS: 11.2 11.4 10.2 10.4
  2393. PROGRAM NUMBER 47
  2394. INCREMENT IN THE STEP CLAUSE OF THE FOR-STATEMENT DEFAULTS TO
  2395. A VALUE OF ONE.
  2396. REFS: 11.2 11.4
  2397. PROGRAM NUMBER 48
  2398. LIMIT AND INCREMENT IN THE FOR-STATEMENT ARE EVALUATED ONCE
  2399. UPON ENTERING THE LOOP.
  2400. REFS: 11.2 11.4
  2401. PROGRAM NUMBER 49
  2402. NESTED FOR-BLOCKS.
  2403. REFS: 11.2 11.4
  2404. PROGRAM NUMBER 50
  2405. ERROR - FOR-STATEMENT WITHOUT A MATCHING NEXT-STATEMENT.
  2406. REFS: 11.2 11.4
  2407. PROGRAM NUMBER 51
  2408. ERROR - NEXT-STATEMENT WITHOUT A MATCHING FOR-STATEMENT.
  2409. REFS: 11.2 11.4
  2410. PROGRAM NUMBER 52
  2411. ERROR - MISMATCHED CONTROL-VARIABLES ON FOR-STATEMENT AND
  2412. NEXT-STATEMENT.
  2413. REFS: 11.4
  2414. PROGRAM NUMBER 53
  2415. ERROR - INTERLEAVED FOR-BLOCKS.
  2416. REFS: 11.4
  2417. Page 58
  2418. PROGRAM NUMBER 54
  2419. ERROR - NESTED FOR-BLOCKS WITH THE SAME CONTROL VARIABLE.
  2420. REFS: 11.4
  2421. PROGRAM NUMBER 55
  2422. ERROR - JUMP INTO FOR-BLOCK.
  2423. REFS: 11.4
  2424. PROGRAM NUMBER 56
  2425. ARRAY ASSIGNMENT WITHOUT THE OPTION-STATEMENT.
  2426. REFS: 6.2 6.4 9.2 9.4 15.2 15.4
  2427. PROGRAM NUMBER 57
  2428. ARRAY ASSIGNMENT WITH OPTION BASE 0.
  2429. REFS: 6.2 6.4 9.2 9.4 15.2 15.4
  2430. PROGRAM NUMBER 58
  2431. ARRAY ASSIGNMENT WITH OPTION BASE 1.
  2432. REFS: 6.2 6.4 9.2 9.4 15.2 15.4
  2433. PROGRAM NUMBER 59
  2434. ARRAY NAMED 'A' IS DISTINCT FROM 'A$'.
  2435. REFS: 6.2 6.4
  2436. PROGRAM NUMBER 60
  2437. NUMERIC CONSTANTS USED AS SUBSCRIPTS ARE ROUNDED TO NEAREST
  2438. INTEGER.
  2439. REFS: 6.4 5.4
  2440. PROGRAM NUMBER 61
  2441. NUMERIC EXPRESSIONS CONTAINING SUBSCRIPTED VARIABLES.
  2442. REFS: 6.2 6.4 7.2 7.4
  2443. PROGRAM NUMBER 62
  2444. GENERAL SYNTACTIC AND SEMANTIC PROPERTIES OF ARRAY CONTROL
  2445. STATEMENTS: OPTION AND DIM.
  2446. REFS: 15.2 15.4
  2447. PROGRAM NUMBER 63
  2448. EXCEPTION - SUBSCRIPT TOO LARGE FOR ONE-DIMENSIONAL ARRAY.
  2449. REFS: 6.5
  2450. PROGRAM NUMBER 64
  2451. EXCEPTION - SUBSCRIPT TOO SMALL FOR TWO-DIMENSIONAL ARRAY.
  2452. REFS: 6.5
  2453. PROGRAM NUMBER 65
  2454. EXCEPTION - SUBSCRIPT TOO SMALL FOR ONE-DIMENSIONAL ARRAY,
  2455. WITH DIM.
  2456. REFS: 6.5 15.2 15.4
  2457. PROGRAM NUMBER 66
  2458. EXCEPTION - SUBSCRIPT TOO LARGE FOR TWO-DIMENSIONAL ARRAY,
  2459. WITH DIM.
  2460. REFS: 6.5 15.2 15.4
  2461. Page 59
  2462. PROGRAM NUMBER 67
  2463. EXCEPTION - SUBSCRIPT TOO SMALL FOR ONE-DIMENSIONAL ARRAY,
  2464. WITH OPTION BASE 1.
  2465. REFS: 6.5 15.2 15.4
  2466. PROGRAM NUMBER 68
  2467. EXCEPTION - SUBSCRIPT TOO LARGE FOR ONE-DIMENSIONAL ARRAY,
  2468. WITH DIM AND OPTION BASE 1.
  2469. REFS: 6.5 15.2 15.4
  2470. PROGRAM NUMBER 69
  2471. EXCEPTION - SUBSCRIPT TOO LARGE FOR TWO-DIMENSIONAL ARRAY,
  2472. WITH DIM AND OPTION BASE 0.
  2473. REFS: 6.5 15.2 15.4
  2474. PROGRAM NUMBER 70
  2475. EXCEPTION - SUBSCRIPT TOO SMALL FOR ONE-DIMENSIONAL ARRAY,
  2476. WITH OPTION BASE 0.
  2477. REFS: 6.5 15.2 15.4
  2478. PROGRAM NUMBER 71
  2479. EXCEPTION - SUBSCRIPT TOO SMALL FOR TWO-DIMENSIONAL ARRAY,
  2480. WITH DIM AND OPTION BASE 0.
  2481. REFS: 6.5 15.2 15.4
  2482. PROGRAM NUMBER 72
  2483. EXCEPTION - SUBSCRIPT TOO SMALL FOR TWO-DIMENSIONAL ARRAY,
  2484. WITH DIM AND OPTION BASE 1.
  2485. REFS: 6.5 15.2 15.4
  2486. PROGRAM NUMBER 73
  2487. ERROR - DIM SETS UPPER BOUND OF ZERO WITH OPTION BASE 1.
  2488. REFS: 15.4
  2489. PROGRAM NUMBER 74
  2490. ERROR - DIM SETS ARRAY TO ONE DIMENSION AND REFERENCE IS MADE
  2491. TO TWO-DIMENSIONAL VARIABLE OF SAME NAME.
  2492. REFS: 15.4 6.4
  2493. PROGRAM NUMBER 75
  2494. ERROR - DIM SETS ARRAY TO ONE DIMENSION AND REFERENCE IS MADE
  2495. TO SIMPLE VARIABLE OF SAME NAME.
  2496. REFS: 15.4 6.4
  2497. PROGRAM NUMBER 76
  2498. ERROR - DIM SETS ARRAY TO TWO DIMENSIONS AND REFERENCE IS MADE
  2499. TO ONE-DIMENSIONAL VARIABLE OF SAME NAME.
  2500. REFS: 15.4 6.4
  2501. PROGRAM NUMBER 77
  2502. ERROR - REFERENCE TO ARRAY AND SIMPLE VARIABLE OF SAME NAME.
  2503. REFS: 6.4
  2504. Page 60
  2505. PROGRAM NUMBER 78
  2506. ERROR - REFERENCE TO ONE-DIMENSIONAL AND TWO-DIMENSIONAL
  2507. VARIABLE OF SAME NAME.
  2508. REFS: 6.4
  2509. PROGRAM NUMBER 79
  2510. ERROR - REFERENCE TO ARRAY WITH LETTER-DIGIT NAME.
  2511. REFS: 6.2
  2512. PROGRAM NUMBER 80
  2513. ERROR - MULTIPLE OPTION STATEMENTS.
  2514. REFS: 15.4
  2515. PROGRAM NUMBER 81
  2516. ERROR - DIM-STATEMENT PRECEDES OPTION-STATEMENT.
  2517. REFS: 15.4
  2518. PROGRAM NUMBER 82
  2519. ERROR - ARRAY-REFERENCE PRECEDES OPTION-STATEMENT.
  2520. REFS: 15.4
  2521. PROGRAM NUMBER 83
  2522. ERROR - ARRAY-REFERENCE PRECEDES DIM-STATEMENT.
  2523. REFS: 15.4
  2524. PROGRAM NUMBER 84
  2525. ERROR - DIMENSIONING THE SAME ARRAY MORE THAN ONCE.
  2526. REFS: 15.4
  2527. PROGRAM NUMBER 85
  2528. GENERAL CAPABILITIES OF GOSUB/RETURN.
  2529. REFS: 10.4
  2530. PROGRAM NUMBER 86
  2531. EXCEPTION - RETURN WITHOUT GOSUB.
  2532. REFS: 10.5
  2533. PROGRAM NUMBER 87
  2534. ERROR - TRANSFER TO NON-EXISTING LINE NUMBER USING THE
  2535. GOSUB-STATEMENT.
  2536. REFS: 10.4
  2537. PROGRAM NUMBER 88
  2538. THE ON-GOTO-STATEMENT.
  2539. REFS: 10.2 10.4
  2540. PROGRAM NUMBER 89
  2541. EXCEPTION - ON-GOTO CONTROL EXPRESSION LESS THAN 1.
  2542. REFS: 10.5
  2543. PROGRAM NUMBER 90
  2544. EXCEPTION - ON-GOTO CONTROL EXPRESSION GREATER THAN NUMBER OF
  2545. LINE-NUMBERS IN LIST.
  2546. REFS: 10.5
  2547. Page 61
  2548. PROGRAM NUMBER 91
  2549. ERROR - TRANSFER TO NON-EXISTING LINE NUMBER USING THE
  2550. ON-GOTO-STATEMENT.
  2551. REFS: 10.4
  2552. PROGRAM NUMBER 92
  2553. READ AND DATA STATEMENTS FOR NUMERIC DATA.
  2554. REFS: 5.2 14.2 14.4
  2555. PROGRAM NUMBER 93
  2556. READ AND DATA STATEMENTS FOR STRING DATA.
  2557. REFS: 3.2 5.2 14.2 14.4
  2558. PROGRAM NUMBER 94
  2559. READING DATA INTO SUBSCRIPTED VARIABLES.
  2560. REFS: 14.2 14.4
  2561. PROGRAM NUMBER 95
  2562. GENERAL USE OF THE READ, DATA, AND RESTORE STATEMENTS.
  2563. REFS: 14.2 14.4
  2564. PROGRAM NUMBER 96
  2565. EXCEPTION - NUMERIC UNDERFLOW WHEN READING DATA CAUSES
  2566. REPLACEMENT BY ZERO.
  2567. REFS: 5.5 14.4
  2568. PROGRAM NUMBER 97
  2569. EXCEPTION - INSUFFICIENT DATA FOR READ.
  2570. REFS: 14.5
  2571. PROGRAM NUMBER 98
  2572. EXCEPTION - READING UNQUOTED STRING DATA INTO A NUMERIC
  2573. VARIABLE.
  2574. REFS: 14.5
  2575. PROGRAM NUMBER 99
  2576. EXCEPTION - READING QUOTED STRING DATA INTO A NUMERIC
  2577. VARIABLE.
  2578. REFS: 14.5
  2579. PROGRAM NUMBER 100
  2580. EXCEPTION - STRING OVERFLOW ON READ.
  2581. REFS: 14.5
  2582. PROGRAM NUMBER 101
  2583. EXCEPTION - NUMERIC OVERFLOW ON READ.
  2584. REFS: 14.5
  2585. PROGRAM NUMBER 102
  2586. ERROR - ILLEGAL CHARACTER IN UNQUOTED STRING IN DATA
  2587. STATEMENT.
  2588. REFS: 3.2 14.2
  2589. Page 62
  2590. PROGRAM NUMBER 103
  2591. ERROR - READING QUOTED STRINGS CONTAINING SINGLE QUOTE.
  2592. REFS: 3.2 14.2
  2593. PROGRAM NUMBER 104
  2594. ERROR - READING QUOTED STRINGS CONTAINING DOUBLE QUOTE.
  2595. REFS: 3.2 14.2
  2596. PROGRAM NUMBER 105
  2597. ERROR - NULL DATUM IN DATA-LIST.
  2598. REFS: 14.2
  2599. PROGRAM NUMBER 106
  2600. ERROR - NULL ENTRY IN READ'S VARIABLE-LIST.
  2601. REFS: 14.2
  2602. PROGRAM NUMBER 107
  2603. INPUT OF NUMERIC CONSTANTS.
  2604. REFS: 5.2 13.2 13.4
  2605. PROGRAM NUMBER 108
  2606. INPUT TO SUBSCRIPTED VARIABLES.
  2607. REFS: 13.2 13.4
  2608. PROGRAM NUMBER 109
  2609. STRING INPUT.
  2610. REFS: 3.2 13.2 13.4
  2611. PROGRAM NUMBER 110
  2612. MIXED INPUT OF STRINGS AND NUMBERS.
  2613. REFS: 13.2 13.4
  2614. PROGRAM NUMBER 111
  2615. EXCEPTION - NUMERIC UNDERFLOW ON INPUT CAUSES REPLACEMENT BY
  2616. ZERO.
  2617. REFS: 5.6 13.4
  2618. PROGRAM NUMBER 112
  2619. EXCEPTION - INPUT-REPLY INCONSISTENT WITH INPUT VARIABLE-LIST.
  2620. REFS: 13.4 13.5 3.2 5.2
  2621. PROGRAM NUMBER 113
  2622. ERROR - NULL ENTRY IN INPUT-LIST.
  2623. REFS: 13.2
  2624. PROGRAM NUMBER 114
  2625. EVALUATION OF ABS FUNCTION.
  2626. REFS: 8.4
  2627. PROGRAM NUMBER 115
  2628. EVALUATION OF INT FUNCTION.
  2629. REFS: 8.4
  2630. Page 63
  2631. PROGRAM NUMBER 116
  2632. EVALUATION OF SGN FUNCTION.
  2633. REFS: 8.4
  2634. PROGRAM NUMBER 117
  2635. ACCURACY OF SQR FUNCTION.
  2636. REFS: 7.6 8.4
  2637. PROGRAM NUMBER 118
  2638. EXCEPTION - SQR OF NEGATIVE ARGUMENT.
  2639. REFS: 8.5
  2640. PROGRAM NUMBER 119
  2641. ACCURACY OF ATN FUNCTION.
  2642. REFS: 7.6 8.4
  2643. PROGRAM NUMBER 120
  2644. ACCURACY OF COS FUNCTION.
  2645. REFS: 7.6 8.4
  2646. PROGRAM NUMBER 121
  2647. ACCURACY OF EXP FUNCTION.
  2648. REFS: 7.6 8.4
  2649. PROGRAM NUMBER 122
  2650. EXCEPTION - OVERFLOW ON VALUE OF EXP FUNCTION.
  2651. REFS: 8.5
  2652. PROGRAM NUMBER 123
  2653. EXCEPTION - UNDERFLOW ON VALUE OF EXP FUNCTION.
  2654. REFS: 8.4 8.6
  2655. PROGRAM NUMBER 124
  2656. ACCURACY OF LOG FUNCTION.
  2657. REFS: 7.6 8.4
  2658. PROGRAM NUMBER 125
  2659. EXCEPTION - LOG OF ZERO ARGUMENT.
  2660. REFS: 8.5
  2661. PROGRAM NUMBER 126
  2662. EXCEPTION - LOG OF NEGATIVE ARGUMENT.
  2663. REFS: 8.5
  2664. PROGRAM NUMBER 127
  2665. ACCURACY OF SIN FUNCTION.
  2666. REFS: 7.6 8.4
  2667. PROGRAM NUMBER 128
  2668. ACCURACY OF TAN FUNCTION.
  2669. REFS: 7.6 8.4
  2670. PROGRAM NUMBER 129
  2671. EXCEPTION - OVERFLOW ON VALUE OF TAN FUNCTION.
  2672. REFS: 8.5
  2673. Page 64
  2674. PROGRAM NUMBER 130
  2675. RND FUNCTION WITHOUT RANDOMIZE STATEMENT.
  2676. REFS: 8.2 8.4
  2677. PROGRAM NUMBER 131
  2678. RND FUNCTION WITH THE RANDOMIZE STATEMENT.
  2679. REFS: 8.2 8.4 17.2 17.4
  2680. PROGRAM NUMBER 132
  2681. AVERAGE OF RANDOM NUMBERS APPROXIMATES 0.5 AND 0 <= RND < 1.
  2682. REFS: 8.4
  2683. PROGRAM NUMBER 133
  2684. CHI-SQUARE UNIFORMITY TEST FOR RND FUNCTION.
  2685. REFS: 8.4
  2686. PROGRAM NUMBER 134
  2687. KOLMOGOROV-SMIRNOV UNIFORMITY TEST FOR RND FUNCTION.
  2688. REFS: 8.4
  2689. PROGRAM NUMBER 135
  2690. SERIAL TEST FOR RANDOMNESS.
  2691. REFS: 8.4
  2692. PROGRAM NUMBER 136
  2693. GAP TEST FOR RND FUNCTION.
  2694. REFS: 8.4
  2695. PROGRAM NUMBER 137
  2696. POKER TEST FOR RND FUNCTION.
  2697. REFS: 8.4
  2698. PROGRAM NUMBER 138
  2699. COUPON COLLECTOR TEST OF RND FUNCTION.
  2700. REFS: 8.4
  2701. PROGRAM NUMBER 139
  2702. PERMUTATION TEST FOR THE RND FUNCTION.
  2703. REFS: 8.4
  2704. PROGRAM NUMBER 140
  2705. RUNS TEST FOR THE RND FUNCTION.
  2706. REFS: 8.4
  2707. PROGRAM NUMBER 141
  2708. MAXIMUM OF GROUP TEST OF RND FUNCTION.
  2709. REFS: 8.4
  2710. PROGRAM NUMBER 142
  2711. SERIAL CORRELATION TEST OF RND FUNCTION.
  2712. REFS: 8.4
  2713. PROGRAM NUMBER 143
  2714. ERROR - TWO ARGUMENTS IN LIST FOR SIN FUNCTION.
  2715. REFS: 7.2 7.4 8.2 8.4
  2716. Page 65
  2717. PROGRAM NUMBER 144
  2718. ERROR - TWO ARGUMENTS IN LIST FOR ATN FUNCTION.
  2719. REFS: 7.2 7.4 8.2 8.4
  2720. PROGRAM NUMBER 145
  2721. ERROR - TWO ARGUMENTS IN LIST FOR RND FUNCTION.
  2722. REFS: 7.2 7.4 8.2 8.4
  2723. PROGRAM NUMBER 146
  2724. ERROR - ONE ARGUMENT IN LIST FOR RND FUNCTION.
  2725. REFS: 7.2 7.4 8.2 8.4
  2726. PROGRAM NUMBER 147
  2727. ERROR - NULL ARGUMENT-LIST FOR INT FUNCTION.
  2728. REFS: 7.2 7.4 8.2 8.4
  2729. PROGRAM NUMBER 148
  2730. ERROR - MISSING ARGUMENT LIST FOR TAN FUNCTION.
  2731. REFS: 7.2 7.4 8.2 8.4
  2732. PROGRAM NUMBER 149
  2733. ERROR - NULL ARGUMENT-LIST FOR RND FUNCTION.
  2734. REFS: 7.2 7.4 8.2 8.4
  2735. PROGRAM NUMBER 150
  2736. ERROR - USING A STRING AS AN ARGUMENT FOR AN
  2737. IMPLEMENTATION-SUPPLIED FUNCTION.
  2738. REFS: 7.2 7.4 8.2 8.4
  2739. PROGRAM NUMBER 151
  2740. USER-DEFINED FUNCTIONS.
  2741. REFS: 16.2 16.4 7.2 7.4
  2742. PROGRAM NUMBER 152
  2743. VALID NAMES FOR USER-DEFINED FUNCTIONS.
  2744. REFS: 16.2
  2745. PROGRAM NUMBER 153
  2746. ERROR - SUPERFLUOUS ARGUMENT-LIST FOR USER-DEFINED FUNCTION.
  2747. REFS: 16.4
  2748. PROGRAM NUMBER 154
  2749. ERROR - MISSING ARGUMENT-LIST FOR USER-DEFINED FUNCTION.
  2750. REFS: 16.4
  2751. PROGRAM NUMBER 155
  2752. ERROR - NULL ARGUMENT-LIST FOR USER-DEFINED FUNCTION.
  2753. REFS: 7.2 7.4 16.2 16.4
  2754. PROGRAM NUMBER 156
  2755. ERROR - EXCESS ARGUMENT IN LIST FOR USER-DEFINED FUNCTION.
  2756. REFS: 16.4
  2757. Page 66
  2758. PROGRAM NUMBER 157
  2759. ERROR - USER-DEFINED FUNCTION WITH TWO PARAMETERS.
  2760. REFS: 16.2 16.4 7.2 7.4
  2761. PROGRAM NUMBER 158
  2762. ERROR - USING A STRING AS AN ARGUMENT FOR A USER-DEFINED
  2763. FUNCTION.
  2764. REFS: 7.2 7.4 16.2 16.4
  2765. PROGRAM NUMBER 159
  2766. ERROR - USING A STRING AS AN ARGUMENT AND PARAMETER FOR A
  2767. USER-DEFINED FUNCTION.
  2768. REFS: 7.2 7.4 16.2 16.4
  2769. PROGRAM NUMBER 160
  2770. ERROR - FUNCTION DEFINED MORE THAN ONCE.
  2771. REFS: 16.4
  2772. PROGRAM NUMBER 161
  2773. ERROR - REFERENCING A FUNCTION INSIDE ITS OWN DEFINITION.
  2774. REFS: 16.4
  2775. PROGRAM NUMBER 162
  2776. ERROR - REFERENCE TO FUNCTION PRECEDES ITS DEFINITION.
  2777. REFS: 16.4
  2778. PROGRAM NUMBER 163
  2779. ERROR - REFERENCE TO AN UNDEFINED FUNCTION.
  2780. REFS: 16.4
  2781. PROGRAM NUMBER 164
  2782. GENERAL USE OF NUMERIC EXPRESSIONS IN LET-STATEMENT.
  2783. REFS: 6.2 6.4 7.2 7.4 8.2 8.4 16.2 16.4
  2784. PROGRAM NUMBER 165
  2785. COMPOUND EXPRESSIONS AND PRINT.
  2786. REFS: 7.2 7.4 12.2 12.4
  2787. PROGRAM NUMBER 166
  2788. COMPOUND EXPRESSIONS USED WITH CONTROL STATEMENTS AND
  2789. FOR-STATEMENTS.
  2790. REFS: 7.2 7.4 10.2 10.4 11.2 11.4
  2791. PROGRAM NUMBER 167
  2792. EXCEPTION - EVALUATION OF NUMERIC EXPRESSIONS ACTING AS
  2793. FUNCTION ARGUMENTS.
  2794. REFS: 7.5 8.4 16.4
  2795. PROGRAM NUMBER 168
  2796. EXCEPTION - OVERFLOW IN THE SUBSCRIPT OF AN ARRAY.
  2797. REFS: 6.4 6.5 7.5
  2798. Page 67
  2799. PROGRAM NUMBER 169
  2800. EXCEPTION - NUMERIC UNDERFLOW IN THE EVALUATION OF NUMERIC
  2801. EXPRESSIONS ACTING AS ARGUMENTS AND SUBSCRIPTS.
  2802. REFS: 6.4 7.4 7.6 8.4
  2803. PROGRAM NUMBER 170
  2804. EXCEPTION - NEGATIVE QUANTITY RAISED TO A NON-INTEGRAL POWER
  2805. IN A SUBSCRIPT.
  2806. REFS: 7.5 6.2
  2807. PROGRAM NUMBER 171
  2808. EXCEPTION - LOG OF A NEGATIVE QUANTITY IN AN ARGUMENT.
  2809. REFS: 8.5 16.2
  2810. PROGRAM NUMBER 172
  2811. EXCEPTION - SQR OF NEGATIVE QUANTITY IN PRINT-ITEM.
  2812. REFS: 8.5 12.2
  2813. PROGRAM NUMBER 173
  2814. EXCEPTION - NEGATIVE QUANTITY RAISED TO A NON-INTEGRAL POWER
  2815. IN TAB-ITEM.
  2816. REFS: 7.5 12.2
  2817. PROGRAM NUMBER 174
  2818. EXCEPTION - EVALUATION OF NUMERIC EXPRESSIONS IN THE PRINT
  2819. STATEMENT.
  2820. REFS: 7.5 8.5 12.2
  2821. PROGRAM NUMBER 175
  2822. EXCEPTION - UNDERFLOW IN THE EVALUATION OF NUMERIC EXPRESSIONS
  2823. IN THE PRINT STATEMENT.
  2824. REFS: 7.4 7.6 8.6 12.2
  2825. PROGRAM NUMBER 176
  2826. EXCEPTION - NEGATIVE QUANTITY RAISED TO A NON-INTEGRAL POWER
  2827. IN IF-STATEMENT.
  2828. REFS: 7.5 10.2
  2829. PROGRAM NUMBER 177
  2830. EXCEPTION - EVALUATION OF NUMERIC EXPRESSIONS IN THE
  2831. IF-STATEMENT.
  2832. REFS: 7.5 10.2
  2833. PROGRAM NUMBER 178
  2834. EXCEPTION - UNDERFLOW IN THE EVALUATION OF NUMERIC
  2835. EXPRESSIONS IN THE IF-STATEMENT.
  2836. REFS: 7.4 7.6 10.2
  2837. PROGRAM NUMBER 179
  2838. EXCEPTION - LOG OF ZERO IN ON-GOTO-STATEMENT.
  2839. REFS: 8.5 10.2
  2840. Page 68
  2841. PROGRAM NUMBER 180
  2842. EXCEPTION - EVALUATION OF NUMERIC EXPRESSIONS IN THE ON-GOTO
  2843. STATEMENT.
  2844. REFS: 7.5 10.2 10.5
  2845. PROGRAM NUMBER 181
  2846. EXCEPTION - UNDERFLOW IN THE EVALUATION OF THE EXP FUNCTION IN
  2847. THE ON-GOTO STATEMENT.
  2848. REFS: 7.4 8.6 10.2 10.5
  2849. PROGRAM NUMBER 182
  2850. EXCEPTION - NEGATIVE QUANTITY RAISED TO A NON-INTEGRAL POWER
  2851. IN FOR-STATEMENT.
  2852. REFS: 7.5 11.2
  2853. PROGRAM NUMBER 183
  2854. EXCEPTION - EVALUATION OF NUMERIC EXPRESSIONS IN THE
  2855. FOR-STATEMENT.
  2856. REFS: 7.5 11.2
  2857. PROGRAM NUMBER 184
  2858. EXCEPTION - UNDERFLOW IN THE EVALUATION OF NUMERIC EXPRESSIONS
  2859. IN THE FOR-STATEMENT.
  2860. REFS: 7.4 7.6 11.2
  2861. PROGRAM NUMBER 185
  2862. ERROR - MISSING KEYWORD LET.
  2863. REFS: 9.2 9.4
  2864. PROGRAM NUMBER 186
  2865. EXTRA SPACES HAVE NO EFFECT.
  2866. REFS: 3.4
  2867. PROGRAM NUMBER 187
  2868. ERROR - SPACES AT THE BEGINNING OF A LINE.
  2869. REFS: 3.4 4.4
  2870. PROGRAM NUMBER 188
  2871. ERROR - SPACES WITHIN LINE-NUMBERS.
  2872. REFS: 3.4 4.4
  2873. PROGRAM NUMBER 189
  2874. ERROR - SPACES WITHIN KEYWORDS.
  2875. REFS: 3.4
  2876. PROGRAM NUMBER 190
  2877. ERROR - NO SPACES BEFORE KEYWORDS.
  2878. REFS: 3.4
  2879. PROGRAM NUMBER 191
  2880. ERROR - NO SPACES AFTER KEYWORDS.
  2881. REFS: 3.4
  2882. Page 69
  2883. PROGRAM NUMBER 192
  2884. ERROR - PRINT-ITEM QUOTED STRINGS CONTAINING SINGLE QUOTE.
  2885. REFS: 3.2 12.2 12.4
  2886. PROGRAM NUMBER 193
  2887. ERROR - PRINT-ITEM QUOTED STRINGS CONTAINING DOUBLE QUOTES.
  2888. REFS: 3.2 12.2 12.4
  2889. PROGRAM NUMBER 194
  2890. ERROR - ASSIGNED QUOTED STRINGS CONTAINING SINGLE QUOTE.
  2891. REFS: 3.2 9.2
  2892. PROGRAM NUMBER 195
  2893. ERROR - ASSIGNED QUOTED STRING CONTAINING DOUBLE QUOTES.
  2894. REFS: 3.2 9.2
  2895. PROGRAM NUMBER 196
  2896. LINE-NUMBERS WITH LEADING ZEROS.
  2897. REFS: 4.2 4.4
  2898. PROGRAM NUMBER 197
  2899. ERROR - DUPLICATE LINE-NUMBERS.
  2900. REFS: 4.4
  2901. PROGRAM NUMBER 198
  2902. ERROR - LINES OUT OF ORDER.
  2903. REFS: 4.4
  2904. PROGRAM NUMBER 199
  2905. ERROR - FIVE-DIGIT LINE-NUMBERS.
  2906. REFS: 4.2
  2907. PROGRAM NUMBER 200
  2908. ERROR - LINE-NUMBER ZERO.
  2909. REFS: 4.4
  2910. PROGRAM NUMBER 201
  2911. ERROR - STATEMENTS WITHOUT LINE-NUMBERS.
  2912. REFS: 4.2 4.4
  2913. PROGRAM NUMBER 202
  2914. ERROR - LINES LONGER THAN 72 CHARACTERS.
  2915. REFS: 4.4
  2916. PROGRAM NUMBER 203
  2917. EFFECT OF ZONES AND MARGIN ON PRINT.
  2918. REFS: 12.4 12.2
  2919. PROGRAM NUMBER 204
  2920. ERROR - PRINT-STATEMENTS CONTAINING LOWERCASE CHARACTERS.
  2921. REFS: 3.2 3.4 12.2
  2922. PROGRAM NUMBER 205
  2923. ERROR - ASSIGNED STRING CONTAINING LOWERCASE CHARACTERS.
  2924. REFS: 3.2 3.4 9.2
  2925. Page 70
  2926. PROGRAM NUMBER 206
  2927. ERROR - ORDERING RELATIONS BETWEEN STRINGS.
  2928. REFS: 3.2 3.4 3.6 10.2
  2929. PROGRAM NUMBER 207
  2930. ERROR - ASSIGNMENT OF A STRING TO A NUMERIC VARIABLE.
  2931. REFS: 9.2
  2932. PROGRAM NUMBER 208
  2933. ERROR - ASSIGNMENT OF A NUMBER TO A STRING VARIABLE.
  2934. REFS: 9.2
  2935. Page 71
  2936. 6.3 Cross-reference Between ANSI Standard And Test Programs
  2937. Section 3: Characters and Strings
  2938. 3.2: Syntax
  2939. 1 93 102 103 104 109 112 192 193 194 195 204 205 206
  2940. 3.4: Semantics
  2941. 1 186 187 188 189 190 191 204 205 206
  2942. 3.6: Remarks
  2943. 206
  2944. Section 4: Programs
  2945. 4.2: Syntax
  2946. 2 3 4 196 199 201
  2947. 4.4: Semantics
  2948. 2 3 4 187 188 196 197 198 200 201 202
  2949. Section 5: Constants
  2950. 5.2: Syntax
  2951. 1 9 10 11 12 13 27 92 93 107 112
  2952. 5.4: Semantics
  2953. 1 9 10 11 12 13 14 27 30 34 60
  2954. 5.5: Exceptions
  2955. 30
  2956. 5.6: Remarks
  2957. 34 96 111
  2958. Section 6: Variables
  2959. 6.2: Syntax
  2960. 6 11 12 22 27 56 57 58 59 61 79 164 170
  2961. 6.4: Semantics
  2962. 6 11 12 22 27 56 57 58 59 60 61 74 75 76 77
  2963. 78 164 168 169
  2964. 6.5: Exceptions
  2965. 63 64 65 66 67 68 69 70 71 72 168
  2966. 6.6: Remarks
  2967. 23
  2968. Page 72
  2969. Cross-reference between ANSI Standard and Test Programs (cont.)
  2970. Section 7: Expressions
  2971. 7.2: Syntax
  2972. 24 25 26 36 37 38 39 40 41 42 43 61 143 144 145
  2973. 146 147 148 149 150 151 155 157 158 159 164 165 166
  2974. 7.4: Semantics
  2975. 24 25 26 33 35 39 40 41 42 43 61 143 144 145 146
  2976. 147 148 149 150 151 155 157 158 159 164 165 166 169 175 178
  2977. 181 184
  2978. 7.5: Exceptions
  2979. 28 29 31 32 35 167 168 170 173 174 176 177 180 182 183
  2980. 7.6: Remarks
  2981. 39 40 41 42 43 117 119 120 121 124 127 128 169 175 178
  2982. 184
  2983. Section 8: Implementation-Supplied Functions
  2984. 8.2: Syntax
  2985. 130 131 143 144 145 146 147 148 149 150 164
  2986. 8.4: Semantics
  2987. 114 115 116 117 119 120 121 123 124 127 128 130 131 132 133
  2988. 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148
  2989. 149 150 164 167 169
  2990. 8.5: Exceptions
  2991. 118 122 125 126 129 171 172 174 179
  2992. 8.6: Remarks
  2993. 123 175 181
  2994. Section 9: The Let-Statement
  2995. 9.2: Syntax
  2996. 6 11 12 56 57 58 185 194 195 205 207 208
  2997. 9.4: Semantics
  2998. 6 11 12 14 56 57 58 185
  2999. 9.5: Exceptions
  3000. 7
  3001. Page 73
  3002. Cross-reference between ANSI Standard and Test Programs (cont.)
  3003. Section 10: Control Statements
  3004. 10.2: Syntax
  3005. 5 15 17 18 19 20 46 88 166 176 177 178 179 180 181
  3006. 206
  3007. 10.4: Semantics
  3008. 5 15 16 17 18 19 21 27 46 85 87 88 91 166
  3009. 10.5: Exceptions
  3010. 86 89 90 180 181
  3011. Section 11: For-Statements and Next-Statements
  3012. 11.2: Syntax 44 45 46 47 48 49 50 51 166 182 183 184
  3013. 11.4: Semantics 44 45 46 47 48 49 50 51 52 53 54 55 166
  3014. Section 12: The Print-Statement
  3015. 12.2: Syntax
  3016. 1 6 165 172 173 174 175 192 193 203 204
  3017. 12.4: Semantics
  3018. 1 6 7 9 10 11 12 13 14 165 192 193 203
  3019. 12.5: Exceptions
  3020. 8
  3021. Section 13: The Input-Statement
  3022. 13.2: Syntax 107 108 109 110 113
  3023. 13.4: Semantics
  3024. 107 108 109 110 111 112
  3025. 13.5: Exceptions
  3026. 112
  3027. Section 14: The Data-, Read-, and Restore-Statements
  3028. 14.2: Syntax
  3029. 92 93 94 95 102 103 104 105 106
  3030. 14.4: Semantics
  3031. 92 93 94 95 96
  3032. 14.5: Exceptions 97 98 99 100 101
  3033. Page 74
  3034. Cross-reference between ANSI Standard and Test Programs (cont.)
  3035. Section 15: Array-Declarations
  3036. 15.2: Syntax
  3037. 56 57 58 62 65 66 67 68 69 70 71 72
  3038. 15.4: Semantics
  3039. 56 57 58 62 65 66 67 68 69 70 71 72 73 74 75
  3040. 76 80 81 82 83 84
  3041. Section 16: User-Defined Functions
  3042. 16.2: Syntax
  3043. 151 152 155 157 158 159 164 171
  3044. 16.4: Semantics
  3045. 151 153 154 155 156 157 158 159 160 161 162 163 164 167
  3046. Section 17: The Randomize Statement
  3047. 17.2: Syntax
  3048. 131
  3049. 17.4: Semantics
  3050. 131
  3051. Section 18: The Remark-Statement
  3052. 18.2: Syntax
  3053. 15
  3054. 18.4: Semantics
  3055. 15
  3056. Page 75
  3057. Appendix A
  3058. Differences between Versions 1 and 2 of
  3059. the Minimal BASIC Test Programs
  3060. In the development of Version 2, we introduced a wide
  3061. variety of changes in the test system. Some were substantive,
  3062. some stylistic. Below is a list of the more significant
  3063. differences.
  3064. 1. Perhaps the most extensive change has to do with the more
  3065. complete treatment of the errors and exceptions which must be
  3066. detected and reported by a conforming processor. We've tried
  3067. to make clear the distinction between the two and just what
  3068. conformance entails in each case. Also, Version 2 tests a
  3069. wider variety of anomalous conditions for the processor to
  3070. handle. It is in this area of helpful recovery from
  3071. programmer mistakes that the Minimal BASIC standard imposes
  3072. stricter requirements than other language standards and the
  3073. tests reflect this emphasis.
  3074. 2. Version 2 differs significantly from Version 1 in its
  3075. treatment of accuracy requirements. We abandoned any attempt
  3076. to compute internal accuracy for the purpose of judging
  3077. conformance as being too vulnerable to the problems of
  3078. circularity. Rather we formulated a criterion of accuracy,
  3079. and computed the required results outside the program itself.
  3080. The programs therefore generally contain only simple IF
  3081. statements comparing constants or variables (no lengthy
  3082. expressions). Those test sections where we did attempt some
  3083. internal computation of accuracy, e.g., the error measure and
  3084. computation of accuracy of constants and variables, are
  3085. informative only.
  3086. 3. There are a number of new informative tests for the RND
  3087. function. These are to help users whose applications are
  3088. strongly dependent on a nearly patternless RND sequence.
  3089. 4. The overall structure of the test system is more explicit.
  3090. The group numbering should help to explain why testing of
  3091. certain sections of the ANSI standard had to precede others.
  3092. Also, it should be easier to isolate the programs relevant to
  3093. the testing of a given section by referring to the group
  3094. structure.
  3095. 5. We tried to be especially careful to keep the printed output
  3096. of the various tests as consistent as their subject matter
  3097. would allow. In particular, we always made sure that the
  3098. programs stated as explicitly as possible what was necessary
  3099. for the test to pass or fail and that this message was
  3100. surrounded by triple asterisks.
  3101. Page 76
  3102. References
  3103. 1. American National Standard for Minimal BASIC, X3.60-1978,
  3104. American National Standards Institute, New York, New York,
  3105. January 1978.
  3106. 2. J. A. Lee, A Candidate Standard for Fundamental BASIC,
  3107. NBS-GCR 73-17, National Bureau of Standards, Washington, DC,
  3108. July 1973.
  3109. 3. T. R. Hopkins, PBASIC - A Verifier for BASIC, Software -
  3110. Practice and Experience, Vol. 10, 175-181 (1980).
  3111. 4. D. E. Knuth, The Art of Computer Programming, Vol. 2,
  3112. Addison-Wesley Publishing Company, Reading, Massachusetts
  3113. (1969).
NBS-114A BIBLIOGRAPHIC DATA SHEET
1. PUBLICATION OR REPORT NO.: NBS SP 500-70/1
3. Publication Date: November 1980
4. TITLE AND SUBTITLE: Computer Science and Technology:
   NBS Minimal BASIC Test Programs - Version 2 - User's Manual, Volume 1 - Documentation
5. AUTHOR(S): John V. Cugini, Joan S. Bowden, Mark W. Skall
6. PERFORMING ORGANIZATION: National Bureau of Standards, Department of Commerce, Washington, D.C. 20234
8. Type of Report & Period Covered: Final
9. SPONSORING ORGANIZATION NAME AND COMPLETE ADDRESS: Same as item 6.
10. SUPPLEMENTARY NOTES: Library of Congress Catalog Card Number: 80-600163
11. ABSTRACT:
   This publication describes the set of programs developed by NBS for the
   purpose of testing conformance of implementations of the computer language BASIC
   to the American National Standard for Minimal BASIC, ANSI X3.60-1978. The
   Department of Commerce has adopted this ANSI standard as Federal Information
   Processing Standard 68. By submitting the programs to a candidate implementation,
   the user can test the various features which an implementation must support in
   order to conform to the standard. While some programs can determine whether or
   not a given feature is correctly implemented, others produce output which the
   user must then interpret to some degree. This manual describes how the programs
   should be used so as to interpret correctly the results of the tests. Such
   interpretation depends strongly on a solid understanding of the conformance rules
   laid down in the standard, and there is a brief discussion of these rules and
   how they relate to the test programs and to the various ways in which the
   language may be implemented.
12. KEY WORDS: Basic; language processor testing; minimal basic; programming language standards; software standards; software testing
13. AVAILABILITY: Unlimited.
    Order from Superintendent of Documents, U.S. Government Printing Office, Washington, DC 20402.
14. NO. OF PRINTED PAGES: 79
15. Price: $4.00
U.S. GOVERNMENT PRINTING OFFICE: 1980-331-021/6715    USCOMM-DC 6043-P80
  3162. ,
  3163. Libranes of Notre Dame /15Y4o7
  3164. 0 518 903
NBS TECHNICAL PUBLICATIONS

PERIODICALS

JOURNAL OF RESEARCH—The Journal of Research of the National Bureau of Standards reports NBS research and development in those disciplines of the physical and engineering sciences in which the Bureau is active. These include physics, chemistry, engineering, mathematics, and computer sciences. Papers cover a broad range of subjects, with major emphasis on measurement methodology and the basic technology underlying standardization. Also included from time to time are survey articles on topics closely related to the Bureau's technical and scientific programs. As a special service to subscribers each issue contains complete citations to all recent Bureau publications in both NBS and non-NBS media. Issued six times a year. Annual subscription: domestic $13; foreign $16.25. Single copy, $3 domestic; $3.75 foreign.

NOTE: The Journal was formerly published in two sections: Section A "Physics and Chemistry" and Section B "Mathematical Sciences."

DIMENSIONS/NBS—This monthly magazine is published to inform scientists, engineers, business and industry leaders, teachers, students, and consumers of the latest advances in science and technology, with primary emphasis on work at NBS. The magazine highlights and reviews such issues as energy research, fire protection, building technology, metric conversion, pollution abatement, health and safety, and consumer product performance. In addition, it reports the results of Bureau programs in measurement standards and techniques, properties of matter and materials, engineering standards and services, instrumentation, and automatic data processing. Annual subscription: domestic $11; foreign $13.75.

NONPERIODICALS

Monographs—Major contributions to the technical literature on various subjects related to the Bureau's scientific and technical activities.

Handbooks—Recommended codes of engineering and industrial practice (including safety codes) developed in cooperation with interested industries, professional organizations, and regulatory bodies.

Special Publications—Include proceedings of conferences sponsored by NBS, NBS annual reports, and other special publications appropriate to this grouping such as wall charts, pocket cards, and bibliographies.

Applied Mathematics Series—Mathematical tables, manuals, and studies of special interest to physicists, engineers, chemists, biologists, mathematicians, computer programmers, and others engaged in scientific and technical work.

National Standard Reference Data Series—Provides quantitative data on the physical and chemical properties of materials, compiled from the world's literature and critically evaluated. Developed under a worldwide program coordinated by NBS under the authority of the National Standard Data Act (Public Law 90-396).

NOTE: The principal publication outlet for the foregoing data is the Journal of Physical and Chemical Reference Data (JPCRD) published quarterly for NBS by the American Chemical Society (ACS) and the American Institute of Physics (AIP). Subscriptions, reprints, and supplements available from ACS, 1155 Sixteenth St., NW, Washington, DC 20056.

Building Science Series—Disseminates technical information developed at the Bureau on building materials, components, systems, and whole structures. The series presents research results, test methods, and performance criteria related to the structural and environmental functions and the durability and safety characteristics of building elements and systems.

Technical Notes—Studies or reports which are complete in themselves but restrictive in their treatment of a subject. Analogous to monographs but not so comprehensive in scope or definitive in treatment of the subject area. Often serve as a vehicle for final reports of work performed at NBS under the sponsorship of other government agencies.

Voluntary Product Standards—Developed under procedures published by the Department of Commerce in Part 10, Title 15, of the Code of Federal Regulations. The standards establish nationally recognized requirements for products, and provide all concerned interests with a basis for common understanding of the characteristics of the products. NBS administers this program as a supplement to the activities of the private sector standardizing organizations.

Consumer Information Series—Practical information, based on NBS research and experience, covering areas of interest to the consumer. Easily understandable language and illustrations provide useful background knowledge for shopping in today's technological marketplace.

Order the above NBS publications from: Superintendent of Documents, Government Printing Office, Washington, DC 20402.

Order the following NBS publications—FIPS and NBSIR's—from the National Technical Information Services, Springfield, VA 22161.

Federal Information Processing Standards Publications (FIPS PUB)—Publications in this series collectively constitute the Federal Information Processing Standards Register. The Register serves as the official source of information in the Federal Government regarding standards issued by NBS pursuant to the Federal Property and Administrative Services Act of 1949 as amended, Public Law 89-306 (79 Stat. 1127), and as implemented by Executive Order 11717 (38 FR 12315, dated May 11, 1973) and Part 6 of Title 15 CFR (Code of Federal Regulations).

NBS Interagency Reports (NBSIR)—A special series of interim or final reports on work performed by NBS for outside sponsors (both government and non-government). In general, initial distribution is handled by the sponsor; public distribution is by the National Technical Information Services, Springfield, VA 22161, in paper copy or microfiche form.

BIBLIOGRAPHIC SUBSCRIPTION SERVICES

The following current-awareness and literature-survey bibliographies are issued periodically by the Bureau:

Cryogenic Data Center Current Awareness Service. A literature survey issued biweekly. Annual subscription: domestic $35; foreign $45.

Liquefied Natural Gas. A literature survey issued quarterly. Annual subscription: $30.

Superconducting Devices and Materials. A literature survey issued quarterly. Annual subscription: $45.

Please send subscription orders and remittances for the preceding bibliographic services to the National Bureau of Standards, Cryogenic Data Center (736), Boulder, CO 80303.
U.S. DEPARTMENT OF COMMERCE
National Bureau of Standards
Washington, D.C. 20234

OFFICIAL BUSINESS
Penalty for Private Use, $300

POSTAGE AND FEES PAID
U.S. DEPARTMENT OF COMMERCE
COM-215

SPECIAL FOURTH-CLASS RATE
BOOK