Linear Regression, Linear Classification, and PCA Assignment (linRegData / ldaData / iris)

Task 1: Linear Regression

In this task, you will use the dataset linRegData.txt, containing 150 points given as input/output pairs (x, y). The input is generated by a sinusoid function, while the output is the joint trajectory of a compliant robotic arm. The first 20 data points form the training set and the remainder form the testing set.

a) Polynomial Features
Write the equation of the model and fit it with polynomial features. Using the Root Mean Square Error (RMSE) as the evaluation metric, select the complexity of the model (up to a 21st-degree polynomial) by evaluating its performance on the testing data. What is the best RMSE you achieve, and what is the corresponding model complexity? Does the answer change if the model is evaluated on the training data instead? Comment on your findings and plot the RMSE for each case (use two lines: one for evaluation on the training data, one for evaluation on the testing data). For the estimation of the optimal parameters, use a ridge coefficient of λ = 10^-6. Using what you think is the best learned model from the previous point, show in a single plot the ground truth (full dataset) and the model prediction over it. Attach snippets of your code showing how you generate polynomial features and how you fit the model (a possible sketch follows after Task 2 below).

b) Gaussian Features
Now use Gaussian features. Each feature is a Gaussian where the means are distributed linearly in x ∈ [0, 2] and the variance is set to σ² = 0.02. The features have to be normalized, i.e., they have to sum to one at every x. Using 10 features, generate a plot with the activation of each feature over time (i.e., plot the matrix Φ). Attach a snippet of your code showing how to compute the Gaussian features (see the sketch after Task 2).

c) Bayesian Linear Regression
Using Bayesian linear regression, plot the mean and the standard deviation of the predictive distribution learned using the first {10, 12, 16, 20, 50, 150} data points (one plot per case; plot it in the interval x ∈ [0, 2]). Discuss how the model uncertainty changes with the number of data points, and the problem of overfitting with Bayesian linear regression. Use the best-performing polynomial features that you found in Task 1a, a ridge coefficient of λ = 10^-6, and assume Gaussian noise with σ² = 0.0025.

Task 2: Linear Classification

In this task, you will use the dataset ldaData.txt, containing 137 feature points x. The first 50 points belong to class C1, the next 43 to class C2, and the last 44 to class C3.

a) Linear Discriminant Analysis
Use Linear Discriminant Analysis to classify the points in the dataset, i.e., assume Gaussian distributions in each class with equal covariances and use the posterior distributions for assigning classes. Attach two plots of the data points using a different color for each class: one with the original dataset, one with the samples classified according to your LDA classifier. Attach a snippet of your code (see the sketch below) and discuss the results. How many samples are misclassified? (You are allowed to use built-in functions for computing the mean and the covariance.)
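For Task 1a, a minimal sketch of polynomial feature generation and the closed-form ridge fit, assuming linRegData.txt holds one whitespace-separated (x, y) pair per row (the file layout and all variable names here are assumptions, not part of the task):

    import numpy as np

    def poly_features(x, degree):
        # Design matrix with columns x^0, x^1, ..., x^degree
        return np.vander(x, degree + 1, increasing=True)

    def fit_ridge(Phi, y, lam=1e-6):
        # Closed-form ridge regression: w = (Phi^T Phi + lam * I)^-1 Phi^T y
        return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

    def rmse(y_true, y_pred):
        return np.sqrt(np.mean((y_true - y_pred) ** 2))

    data = np.loadtxt("linRegData.txt")          # assumed: one (x, y) pair per row
    x, y = data[:, 0], data[:, 1]
    x_train, y_train, x_test, y_test = x[:20], y[:20], x[20:], y[20:]

    for degree in range(1, 22):                  # up to a 21st-degree polynomial
        w = fit_ridge(poly_features(x_train, degree), y_train)
        err_train = rmse(y_train, poly_features(x_train, degree) @ w)
        err_test = rmse(y_test, poly_features(x_test, degree) @ w)
        print(degree, err_train, err_test)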
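For Task 1b, one possible way to compute the normalized Gaussian features with 10 means placed linearly in [0, 2] and variance 0.02; the normalization divides each row of Φ by its sum so the features sum to one at every x:

    import numpy as np
    import matplotlib.pyplot as plt

    def gaussian_features(x, n_features=10, var=0.02, lo=0.0, hi=2.0):
        means = np.linspace(lo, hi, n_features)
        # Unnormalized Gaussian activations, shape (len(x), n_features)
        Phi = np.exp(-0.5 * (x[:, None] - means[None, :]) ** 2 / var)
        # Normalize so that the features sum to one at every x
        return Phi / Phi.sum(axis=1, keepdims=True)

    x = np.linspace(0.0, 2.0, 200)
    Phi = gaussian_features(x)
    plt.plot(x, Phi)                             # one curve per feature activation
    plt.xlabel("x"); plt.ylabel("feature activation")
    plt.show()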
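For Task 1c, a sketch of Bayesian linear regression with a zero-mean isotropic Gaussian prior. The prior precision is set to α = λ·β with β = 1/σ², so that α/β matches the ridge coefficient λ = 10^-6 from the task; this correspondence is an assumption about how the ridge coefficient is meant to be used. poly_features refers to the helper from the Task 1a sketch.

    import numpy as np

    def bayes_lin_reg(Phi, y, noise_var=0.0025, lam=1e-6):
        # Posterior over weights for a zero-mean isotropic Gaussian prior.
        # beta is the noise precision; alpha = lam * beta so that alpha / beta = lam.
        beta = 1.0 / noise_var
        alpha = lam * beta
        S_N = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
        m_N = beta * S_N @ Phi.T @ y
        return m_N, S_N

    def predict(Phi_query, m_N, S_N, noise_var=0.0025):
        # Predictive mean and standard deviation at the query inputs
        mean = Phi_query @ m_N
        var = noise_var + np.einsum("ij,jk,ik->i", Phi_query, S_N, Phi_query)
        return mean, np.sqrt(var)

    # Example usage for the first n points (degree is whatever worked best in Task 1a):
    # m_N, S_N = bayes_lin_reg(poly_features(x[:n], degree), y[:n])
    # mean, std = predict(poly_features(x_plot, degree), m_N, S_N)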
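For Task 2a, a sketch of LDA with a pooled (shared) covariance and empirical class priors; the file is assumed to contain one multi-dimensional feature point per row, and the class sizes 50/43/44 are taken from the task description:

    import numpy as np

    X = np.loadtxt("ldaData.txt")                # assumed: one feature point per row
    sizes = [50, 43, 44]                         # class sizes given in the task
    classes = np.split(X, np.cumsum(sizes)[:-1])

    means = [c.mean(axis=0) for c in classes]
    priors = [len(c) / len(X) for c in classes]
    # Pooled (shared) covariance across the three classes, as LDA assumes
    pooled = sum((len(c) - 1) * np.cov(c, rowvar=False) for c in classes) / (len(X) - len(classes))
    pooled_inv = np.linalg.inv(pooled)

    def log_posterior(x):
        # Log posterior of each class (up to a constant) under equal-covariance Gaussians
        return np.array([-0.5 * (x - m) @ pooled_inv @ (x - m) + np.log(p)
                         for m, p in zip(means, priors)])

    pred = np.array([np.argmax(log_posterior(x)) for x in X])
    true = np.repeat([0, 1, 2], sizes)
    print("misclassified samples:", int(np.sum(pred != true)))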
Task 3: Principal Component Analysis

In this task, you will use the dataset iris.txt. It contains data from three kinds of Iris flowers ('Setosa', 'Versicolour' and 'Virginica') with 4 attributes: sepal length, sepal width, petal length, and petal width. Each row contains one sample, and the last attribute is the label (0 means that the sample comes from a 'Setosa' plant, 1 from a 'Versicolour', and 2 from a 'Virginica'). (You are allowed to use built-in functions for computing the mean, the covariance, eigenvalues, and eigenvectors.)

a) Principal Component Analysis
Apply PCA to your normalized dataset and generate a plot showing the proportion (percentage) of the cumulative variance explained. How many components do you need in order to explain at least 95% of the dataset variance? Attach a snippet of your code (a possible sketch follows after part c below).

b) Low-Dimensional Space
Using as many components as needed to explain 95% of the dataset variance, generate a scatter plot of the lower-dimensional projection of the data. Use different colors or symbols for data points from different classes. What do you observe? Attach a snippet of your code.

c) Reconstruction
Reconstruct the original dataset using different numbers of principal components. Using the normalized root mean square error (NRMSE) as a metric, fill in the table below (error per input versus the number of principal components used):

N. of components    x1    x2    x3    x4
1
2
3
4

Attach a snippet of your code. (Remember that in the first step you normalized the data.)
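For Task 3a, a sketch that standardizes the four attributes and computes the cumulative explained-variance ratio from the eigenvalues of their covariance matrix; it assumes iris.txt is whitespace-separated with the label in the fifth column:

    import numpy as np

    data = np.loadtxt("iris.txt")                # assumed: 4 attributes + label per row
    X, labels = data[:, :4], data[:, 4].astype(int)

    # Normalize: zero mean, unit variance per attribute
    X_norm = (X - X.mean(axis=0)) / X.std(axis=0)

    cov = np.cov(X_norm, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]            # sort components by explained variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    cum_ratio = np.cumsum(eigvals) / eigvals.sum()
    print(cum_ratio)                             # cumulative explained variance
    print("components for 95%:", int(np.argmax(cum_ratio >= 0.95)) + 1)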
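For Task 3b, a continuation of the Task 3a sketch (it reuses X_norm, eigvecs, cum_ratio, and labels) that projects onto the leading components and colors the points by class:

    import numpy as np
    import matplotlib.pyplot as plt

    k = int(np.argmax(cum_ratio >= 0.95)) + 1    # components needed for 95% variance
    Z = X_norm @ eigvecs[:, :k]                  # lower-dimensional projection

    # Plot the first two projected dimensions (assumes k >= 2), one color per class
    for c, name in enumerate(["Setosa", "Versicolour", "Virginica"]):
        plt.scatter(Z[labels == c, 0], Z[labels == c, 1], label=name)
    plt.xlabel("PC 1"); plt.ylabel("PC 2"); plt.legend()
    plt.show()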
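For Task 3c, a continuation (again reusing X, X_norm, and eigvecs from the Task 3a sketch) that reconstructs the data from the first k components and reports a per-attribute NRMSE; here NRMSE is taken as the RMSE divided by the range of each original attribute, which is one common convention (check whether your course defines it differently):

    x_mean, x_std = X.mean(axis=0), X.std(axis=0)
    x_range = X.max(axis=0) - X.min(axis=0)            # normalization constant for NRMSE

    for k in range(1, 5):
        W = eigvecs[:, :k]
        # Project, reconstruct in normalized space, then undo the normalization
        X_rec = ((X_norm @ W) @ W.T) * x_std + x_mean
        rmse_per_attr = np.sqrt(np.mean((X - X_rec) ** 2, axis=0))
        print(k, rmse_per_attr / x_range)              # NRMSE for x1..x4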
