EECS 280 Project 5: Machine Learning
Due Friday, 13 April 2018, 8pm

In this project, you will write a program that uses natural language processing and machine learning techniques to automatically identify the subject of posts from the EECS 280 Piazza. You will gain experience with recursion, binary trees, templates, comparators, and the map data structure. Another goal is to prepare you for future courses (like EECS 281) or your own independent programming projects, so we have given you a lot of freedom to design the structure of your overall application.

The correctness portion of the final submission is worth approximately 70%, with the remaining approximately 30% based on the thoroughness of your BST test cases and style grading. Your test cases and style will both be graded by the autograder.

Winter 2018: We will use the same automated style grading on this project that we did for project 4. On this project, the automated style checks will be part of the grade. To run the tests on your own, check out the style checking tutorial.

You may work alone or with a partner. Please see the syllabus for partnership rules.

Table of Contents
- Project Roadmap
- Project Introduction
- Project Essentials
- The BinarySearchTree ADT
- Testing BinarySearchTree
- The Map ADT
- Testing Map
- The Piazza Datasets
- Classifying Piazza Posts with NLP and ML
- The Bag of Words Model
- Training the Classifier
- Predicting a Label for a New Post
- Implementing Your Top-Level Classifier Application
- Classifier Application Interface
- Output
- Results
- Appendix A: Map Example
- Appendix B: Splitting a Whitespace-Delimited String

Project Roadmap

1. Set up your IDE

Use the tutorial from project 1 to get your visual debugger set up.
Download the starter files using this wget link: https://eecs280staff.github.io/p5-ml/starter-files.tar.gz

Before setting up your visual debugger, you'll need to rename each .h.starter file to a .h file.

$ mv BinarySearchTree.h.starter BinarySearchTree.h
$ mv Map.h.starter Map.h

You'll also need to create these new files and add function stubs.

$ touch main.cpp

These are the executables you'll use in this project:
- BinarySearchTree_compile_check.exe
- BinarySearchTree_public_test.exe
- BinarySearchTree_tests.exe
- Map_compile_check.exe
- Map_public_test.exe
- main.exe

If you're working in a partnership, set up version control for a team.

2. Read the Project Introduction and Project Essentials

See the first sections below for an introduction to the project as well as essential instructions for successfully completing the project.

3. Test and implement the BinarySearchTree data structure

We've provided header files with comments. Test and implement those functions. Be sure to use recursion and tail recursion where the comments require it.

4. Test and implement the Map data structure

Implement and test a Map ADT that internally uses your BinarySearchTree to provide an interface that works (almost) exactly like std::map from the STL! Appendix A has an example.

5. Test and implement the Piazza Classifier Application

This specification describes the interface for the overall application, but it's up to you how to separate it into functions and data structures. Appendix B has tips and tricks for this part.

Submit to the Autograder

Submit the following files to the autograder.
- BinarySearchTree.h
- Map.h
- main.cpp
- BinarySearchTree_tests.cpp

Project Introduction

The goal for this project is to write an intelligent program that can classify Piazza posts according to topic. This task is easy for humans: we simply read and understand the content of the post, and the topic is intuitively clear. But how do we compose an algorithm to do the same?
We can't just tell the computer to "look at it" and understand. This is typical of problems in artificial intelligence and natural language processing.

We know this is about Euchre, but how can we write an algorithm that "knows" that?

With a bit of introspection, we might realize each individual word is a bit of evidence for the topic about which the post was written. Seeing a word like "card", "spades", or even "bob" leads us toward the Euchre project. We judge a potential label for a post based on how likely it is given all the evidence. Along these lines, information about how common each word is for each topic essentially constitutes our classification algorithm.

But we don't have that information (i.e. that algorithm). You could try to sit down and write out a list of common words for each project, but there's no way you'll get them all. For example, the word "lecture" appears much more frequently in posts about exam preparation. This makes sense, but we probably wouldn't come up with it on our own. And what if the projects change? We don't want to have to put in all that work again.

Instead, let's write a program to comb through Piazza posts from previous terms (which are already tagged according to topic) and learn which words go with which topics. Essentially, the result of our program is an algorithm! This approach is called (supervised) machine learning. Once we've trained the classifier on some set of Piazza posts, we can apply it to new ones written in the future.

Authors

This project was developed for EECS 280, Fall 2016 at the University of Michigan. Andrew DeOrio and James Juett wrote the original project and specification. Amir Kamil contributed to code structure, style, and implementation details.

Project Essentials

The project consists of three main phases:

1. 
Implement and test the static _impl member functions in BinarySearchTree.
2. Implement and test Map by using the has-a pattern on top of BinarySearchTree.
3. Design, implement, and test the top-level classifier application.

The focus of part 1 is on working with recursive data structures and algorithms. The framework and some of the implementation for BinarySearchTree is provided for you, but you must implement the core functionality in several static member functions. Be mindful of requirements for which implementations must use certain kinds of recursion.

Part 2 should not require a lot of additional implementation code. Make sure to reuse the functionality already present in BinarySearchTree wherever possible.

For your top-level application, you must use std::map in place of Map. This means a bug in parts 1 or 2 will not jeopardize your ability to complete part 3. Additionally, the implementation of BinarySearchTree (and consequently Map) we have you write will not be fast enough for the classifier.

Requirements and Restrictions

- DO: Put all top-level application code in main.cpp. DO NOT: Create additional files other than main.cpp.
- DO: Create any ADTs or functions you wish for your top-level classifier application. DO NOT: Modify the BinarySearchTree or Map public interfaces.
- DO: Use any part of the STL for your top-level classifier application, including map and set. DO NOT: Use STL containers in your implementation of BinarySearchTree or Map.
- DO: Use any part of the STL except for containers in your BinarySearchTree and Map implementations. DO NOT: Use your Map implementation for the top-level application. It will be too slow.
- DO: Use recursion for the BST _impl functions. DO NOT: Use iteration for the BST _impl functions.
- DO: Follow course style guidelines. DO NOT: Use static or global variables.

Starter Files

The following table describes each file included in the starter code.
As you begin development, rename files to remove .starter.

- BinarySearchTree.h.starter: Defines an ADT for a binary search tree.
- BinarySearchTree_tests.cpp: Add your BST tests to this file.
- BinarySearchTree_public_test.cpp: A public test for BinarySearchTree.
- BinarySearchTree_compile_check.cpp: A compilation test for BinarySearchTree.h.
- TreePrint.h: Auxiliary file to support printing trees. You do not need to look at this file. Do not modify it.
- Map.h.starter: Map ADT.
- Map_public_test.cpp: A sample test for Map. You are encouraged to write map tests, but do not submit them.
- Map_public_test.out.correct: Correct output for the Map public test.
- Map_compile_check.cpp: A compilation test for Map.h.
- Piazza Datasets (four .csv files): Piazza post data from several past EECS 280 terms in Comma Separated Value (CSV) format.
- csvstream.h: A library for reading data in CSV format.
- train_small.csv, test_small.csv, test_small.out.correct, test_small_debug.out.correct: Sample input training and testing files for the classifier application, as well as the corresponding correct output when run with those files.
- Makefile: Used by the make command to compile the executable.
- unit_test_framework.h, unit_test_framework.cpp: The unit test framework you must use to write your test cases.

The BinarySearchTree ADT

A binary search tree supports efficiently storing and searching for elements.

Template Parameters

BinarySearchTree has two template parameters:
- T: The type of elements stored within the tree.
- Compare: The type of comparator object (a functor) that should be used to determine whether one element is less than another. The default type is std::less<T>, which compares two T objects with the < operator. To compare elements in some other fashion, the comparator type must be specified.

No Duplicates Invariant

In the context of this project, duplicate values are NOT allowed in a BST.
This does not need to be the case for binary search trees in general, but it avoids some distracting complications.

Sorting Invariant

A binary search tree is special in that the structure of the tree corresponds to a sorted ordering of elements and allows efficient searches (i.e. in logarithmic time).

Every node in a well-formed binary search tree must obey this sorting invariant:

- It represents an empty tree (i.e. a null Node*), OR
- The left subtree obeys the sorting invariant, and every element in the left subtree is less than the root element (i.e. this node), AND the right subtree obeys the sorting invariant, and the root element (i.e. this node) is less than every element in the right subtree.

Put briefly, go left and you'll find smaller elements. Go right and you'll find bigger ones. [figure: examples of well-formed sorted binary trees]

Data Representation

The data representation for BinarySearchTree is a tree-like structure of nodes similar to that described in lecture. Each Node contains an element and pointers to left and right subtrees. The structure is self-similar. A null pointer indicates an empty tree. You must use this data representation. Do not add member variables to BinarySearchTree or Node.

Public Member Functions and Iterator Interface

The public member functions and iterator interface for BinarySearchTree are already implemented in the starter code. DO NOT modify the code for any of these functions. They delegate the work to private, static implementation functions, which you will write.

Implementation Functions

The core of the implementation for BinarySearchTree is a collection of private, static member functions that operate on tree-like structures of nodes.
You are responsible for writing the implementation of several of these functions.

To disambiguate these implementation functions from the public interface functions, we have used names ending with _impl. (This is not strictly necessary, because the compiler can differentiate them based on the Node* parameter.)

There are a few keys to thinking about the implementation of these functions:

- The functions have no idea that such a thing as the BinarySearchTree class exists, and they shouldn't. A "tree" is not a class, but simply a tree-shaped structure of Nodes. The parameter node points to the root of these nodes.
- A recursive implementation depends on the idea of similar subproblems, so a "subtree" is just as much a tree as the "whole tree". That means you shouldn't need to think about "where you came from" in your implementation.
- Every function should have a base case! Start by writing this part.
- You only need to think about one "level" of recursion at a time. Avoid thinking about the contents of subtrees and take the recursive leap of faith.

We've structured the starter code so that the first bullet point above is actually enforced by the language. Because they are static member functions, they do not have access to a receiver object (i.e. there's no this pointer). That means it's actually impossible for these functions to try to do something bad with the BinarySearchTree object (e.g. trying to access the root member variable).

Instead, the implementation functions are called from the regular member functions to perform specific operations on the underlying nodes and tree structure, and are passed only a pointer to the root Node of the tree/subtree they should work with.

The empty_impl function must run in constant time. It must be able to determine and return its result immediately, without using either iteration or recursion. The rest of the implementation functions must be recursive. There are additional requirements on the kind of recursion that must be used for some functions.
See comments in the starter code for details. Iteration (i.e. using loops) is not allowed in any of the _impl functions.

Using the Comparator

The _impl functions that need to compare data take in a comparator parameter called less. Make sure to use less rather than the < operator when comparing elements.

The insert_impl Function

The key to properly maintaining the sorting invariant lies in the implementation of the insert_impl function; this is essentially where the tree is built, and this function will make or break the whole ADT. Your insert_impl function should follow this procedure:

1. Handle an originally empty tree as a special case.
2. Insert the element into the appropriate place in the tree, keeping in mind the sorting invariant. You'll need to compare elements for this, and to do so make sure to use the less comparator passed in as a parameter.
3. Use the recursive leap of faith and call insert_impl itself on the left or right subtree. Hint: You do need to use the return value of the recursive call. (Why?)

Important: When recursively inserting an item into the left or right subtree, be sure to replace the old left or right pointer of the current node with the result from the recursive call. This is essential, because in some cases the old tree structure (i.e. the nodes pointed to by the old left or right pointer) is not reused. Specifically, if the subtree is empty, the only way to get the current node to "know" about the newly allocated node is to use the pointer returned from the recursive call.

Technicality: In some cases, the tree structure may become unbalanced (i.e. too many nodes on one side of the tree, causing it to be much deeper than necessary) and prevent efficient operation for large trees. You don't have to worry about this.

Testing BinarySearchTree

You must write and submit tests for the BinarySearchTree class.
Your test cases MUST use the unit test framework, otherwise the autograder will not be able to evaluate them. Since unit tests should be small and run quickly, you are limited to 50 TEST() items per file, and your whole test suite must finish running in less than 5 seconds. Please bear in mind that you DO NOT need 50 unit tests to catch all the bugs. Writing targeted test cases and avoiding redundant tests can help catch more bugs in fewer tests.

How We Grade Your Tests

We will autograde your BinarySearchTree unit tests by running them against a number of implementations of the module. If a test of yours fails for one of those implementations, that is considered a report of a bug in that implementation.

We grade your tests by the following procedure:

1. We compile and run your test cases with a correct solution. Test cases that pass are considered valid. Tests that fail (i.e. falsely report a bug in the solution) are invalid. The autograder gives you feedback about which test cases are valid/invalid. Since unit tests should be small and run quickly, your whole test suite must finish running in less than 5 seconds.
2. We have a set of intentionally incorrect implementations that contain bugs. You get points for each of these "buggy" implementations that your valid tests can catch.
3. How do you catch the bugs? We compile and run all of your valid test cases against each buggy implementation. If any of these test cases fail (i.e. report a bug), we consider that you have caught the bug and you earn the points for that bug.

The Map ADT

The Map ADT works just like std::map. Map has three template parameters for the types of keys and values, as well as a customizable comparator type for comparing keys. The most important functions are find, insert, and the [] operator.
The RMEs and comments in Map.h provide the details, and Appendix A includes an example.

Note: Although you must implement Map, use std::map instead in your top-level application. Our implementation of Map is not fast enough for the classifier.

Building on the BST

The operation of a map is quite similar to that of a BST. The additional consideration for a map is that we want to store key-value pairs instead of single elements, but also have any comparisons (e.g. for searching) only depend on the key and be able to freely change the stored values without messing up the BST sorting invariant. We can employ the has-a pattern using a BinarySearchTree as the data representation for Map:

BST template parameter T: instantiate with Pair_type.

We've provided a using declaration in the starter code for Pair_type:

using Pair_type = std::pair<Key_type, Value_type>;

std::pair is basically like a struct that stores two objects together. Key_type and Value_type are whatever template parameters were used to instantiate Map.

BST template parameter Compare: instantiate with PairComp.

You'll need to define your own comparator by declaring a functor type called PairComp (or whatever you want to call it) in your Map class. The overloaded () operator should accept two objects of Pair_type and return whether the key of the LHS is less than the key of the RHS (according to Key_compare).

Finally, we can even reuse the iterators from the BST class, since the interface we want (based on std::map) calls for iterators to yield a key-value pair when dereferenced. Since the element type T of the BST is our Pair_type, BST iterators will yield pairs and will work just fine.
We've provided this using declaration with the starter code to make Map::Iterator simply an alias for iterators from the corresponding BST:

using Iterator = typename BinarySearchTree<Pair_type, PairComp>::Iterator;

Testing Map

You are encouraged to write tests for the Map ADT, but they are not required for the project submission. Do not submit them to the autograder.

The Piazza Datasets

For this project, we retrieved archived Piazza posts from EECS 280 in past terms. We will focus on two different ways to divide Piazza posts into labels (i.e. categories).

By topic. Labels: "exam", "calculator", "euchre", "image", "recursion", "statistics"

Example: Posts extracted from w16_projects_exam.csv

label | content
exam | will final grades be posted within 72 hours
calculator | can we use the friend class list in stack
euchre | weird problem when i try to compile euchrecpp
image | is it normal for the horses tests to take 10 minutes
recursion | is an empty tree a sorted binary tree
statistics | are we supposed to have a function for summary
... | ...

By author. Labels: "instructor", "student"

Example: Posts extracted from w14-f15_instructor_student.csv

label | content
instructor | disclaimer not actually a party just extra OH
student | how can you use valgrind with calccpp
student | could someone explain to me what the this keyword means
... | ...

The Piazza datasets are Comma Separated Value (CSV) files. The label for each post is found in the "tag" column, and the content in the "content" column. There may be other columns in the CSV file; your code should ignore all but the "tag" and "content" columns. You may assume all Piazza files are formatted correctly, and that post content and labels only contain lowercase characters, numbers, and no punctuation. We recommend using the csvstream.h library (see https://github.com/awdeorio/csvstream for documentation) to read CSV files in your application.
The csvstream.h file itself is included with the starter code.

Your classifier should not hardcode any labels. Instead, it should use the exact set of labels that appear in the training data.

Appendix B contains code for splitting a string of content into a set of individual words.

We have included several Piazza datasets with the project:

- train_small.csv: Made-up training data intended for small-scale testing.
- test_small.csv: Made-up test data intended for small-scale testing.
- w16_projects_exam.csv: (Train) Real posts from W16 labeled by topic.
- sp16_projects_exam.csv: (Test) Real posts from Sp16 labeled by topic.
- w14-f15_instructor_student.csv: (Train) Real posts from four terms labeled by author.
- w16_instructor_student.csv: (Test) Real posts from W16 Piazza labeled by author.

For the real datasets, we have indicated which are intended for training vs. testing.

Classifying Piazza Posts with NLP and ML

At a high level, the classifier we'll implement works by assuming a probabilistic model of how Piazza posts are composed, and then finding which label (e.g. our categories of "euchre", "exam", etc.) is the most probable source of a particular post.

All the details of natural language processing (NLP) and machine learning (ML) techniques you need to implement the project are described here. You are welcome to consult other resources, but there are many kinds of classifiers that have subtle differences. The classifier we describe here is a simplified version of a "Multi-Variate Bernoulli Naive Bayes Classifier". If you find other resources, but you're not sure they apply, make sure to check them against this specification. This document provides a more complete description of the way the classifier works, in case you're interested in the math behind the formulas here.

The Bag of Words Model

We will treat a Piazza post as a "bag of words":
each post is simply characterized by which words it includes. The ordering of words is ignored, as are multiple occurrences of the same word. These two posts would be considered equivalent:

- "the left bower took the trick"
- "took took trick the left bower bower"

Thus, we could imagine the post generation process as a person sitting down and going through every possible word and deciding which to toss into a bag.

Background: Conditional Probabilities and Notation

We write P(A) to denote the probability (a number between 0 and 1) that some event A will occur. P(A|B) denotes the probability that event A will occur given that we already know event B has occurred. For example, P(post contains "bower" | post is labeled "euchre") = 0.007. This means that if a Piazza post is about the euchre project, there is a 0.7% chance it will contain the word bower (we should say "at least once", technically, because of the bag of words model).

Training the Classifier

Before the classifier can make predictions, it needs to be trained on a set of previously labeled Piazza posts (e.g. train_small.csv or w16_projects_exam.csv). Your application should process each post in the training set, and record the following information:

- The total number of posts in the entire training set.
- The number of unique words in the entire training set. (The vocabulary size.)
- For each word w, the number of posts in the entire training set that contain w.
- For each label C, the number of posts with that label.
- For each label C and word w, the number of posts with label C that contain w.

Predicting a Label for a New Post

Given a new Piazza post X, we must determine the most probable label C, based on what the classifier has learned from the training set. A measure of the likelihood of C is the log-probability score given the post:

ln P(C) + ln P(w1 | C) + ln P(w2 | C) + ... + ln P(wn | C)

Important: Because we're using the bag-of-words model, the words w1, w2, ..., wn in this formula are only the unique words in the post, not including duplicates!
To ensure consistent results, make sure to add the contributions from each word in alphabetic order.

The classifier should predict whichever label has the highest log-probability score for the post. If multiple labels are tied, predict whichever comes first alphabetically.

ln P(C) is the log prior probability of label C and is a reflection of how common it is:

ln P(C) = ln ( (number of training posts with label C) / (number of training posts) )

ln P(w | C) is the log likelihood of a word w given a label C, which is a measure of how likely it is to see word w in posts with label C. The regular formula for ln P(w | C) is:

ln P(w | C) = ln ( (number of training posts with label C that contain w) / (number of training posts with label C) )

However, if w was never seen in a post with label C in the training data, we get a log-likelihood of -∞, which is no good. Instead, use one of these two alternate formulas:

ln P(w | C) = ln ( (number of training posts that contain w) / (number of training posts) )
(Use when w does not occur in posts labeled C but does occur in the training data overall.)

ln P(w | C) = ln ( 1 / (number of training posts) )
(Use when w does not occur anywhere at all in the training set.)

Implementing Your Top-Level Classifier Application

For submission to the autograder, your top-level application code must be entirely contained in a single file, main.cpp. However, the structure of your classifier application, including which procedural abstractions and/or ADTs to use for the classifier, is entirely up to you. Make sure your decisions are informed by carefully considering the classifier and top-level application described in this specification.

We strongly suggest you make a class to represent the classifier: the private data members for the class should keep track of the classifier parameters learned from the training data, and the public member functions should provide an interface that allows you to train the classifier and make predictions for new Piazza posts.

Here is some high-level guidance:

1. First, your application should read posts from a file (e.g. train_small.csv) and use them to train the classifier. After training, your classifier abstraction should store the information mentioned in the "Training the Classifier" section above.
2. 
Your classifier should be able to compute the log-probability score of a post (i.e. a collection of words) given a particular label. To predict a label for a new post, it should choose the label that gives the highest log-probability score.
3. Read posts from a file (e.g. test_small.csv) to use as testing data. For each post, predict a label using your classifier.

Some of these steps have output associated with them. See the "output" section below for the details.

You must also write RMEs and appropriate comments to describe the interfaces for the abstractions you choose (ADTs, classes, functions, etc.). You should also write unit tests to verify each component works on its own.

You are welcome to use any part of the STL in your top-level classifier application. In particular, std::map and std::set will be useful.

Classifier Application Interface

Here is the usage message for the top-level application:

$ ./main.exe
Usage: main.exe TRAIN_FILE TEST_FILE [--debug]

The main application always requires files for both training and testing, although the test file may be empty. You may assume all files are in the correct format.

Use the provided small-scale files for initial testing and to check your output formatting:

$ ./main.exe train_small.csv test_small.csv
$ ./main.exe train_small.csv test_small.csv --debug

Correct output is in test_small.out.correct and test_small_debug.out.correct. The output format is discussed in detail below.

Error Checking

The program checks that the command line arguments obey the following rules:

- There are 3 or 4 arguments, including the executable name itself (i.e. argv[0]).
- The fourth argument (i.e. argv[3]), if provided, must be --debug.

If any of these are violated, print out the usage message and then quit by returning a non-zero value from main.
Do not use the exit library function, as this fails to clean up local objects.

cout << "Usage: main.exe TRAIN_FILE TEST_FILE [--debug]" << endl;

If any file cannot be opened, print out the following message, where filename is the name of the file that could not be opened, and quit by returning a non-zero value from main.

cout << "Error opening file: " << filename << endl;

You do not need to do any error checking for command-line arguments or file I/O other than what is described on this page. However, you must use precisely the error messages given here in order to receive credit. (Just literally use the code given here to print them.)

As mentioned earlier, you may assume all Piazza data files are in the correct format.

Output

This section details the output your program should write to cout, using the small files mentioned above as an example. Some lines are indented by two spaces. Output only printed when the --debug flag is provided is indicated here with "(DEBUG)".

Add this line at the beginning of your main function to set floating point precision:

cout.precision(3);

First, print information about the training data:

(DEBUG) Line-by-line, the label and content for each training document.

training data:
  label = euchre, content = can the upcard ever be the left bower
  label = euchre, content = when would the dealer ever prefer a card to the upcard
  label = euchre, content = bob played the same card twice is he cheating
  ...
  label = calculator, content = does stack need its own big three
  label = calculator, content = valgrind memory error not sure what it means

The number of training posts.

trained on 8 examples

(DEBUG) The vocabulary size (the number of unique words in all training content).

vocabulary size = 49

An extra blank line.

If the debug option is provided, also print information about the classifier trained on the training posts. Whenever classes or words are listed, they are in alphabetical order.