SDSC6012 Course 4-Autoregressive models
#sdsc6012 Stationarity. Strict Stationarity: A time series $\{x_t\}$ is strictly stationary if and only if for any $k$, any time points $t_1, t_2, \ldots, t_k$, and any time shift $h$, we have $P\{x_{t_1} \leq c_1, \ldots, x_{t_k} \leq c_k\} = P\{x_{t_1+h} \leq c_1, \ldots, x_{t_k+h} \leq c_k\}$. Core Meaning: Strict stationarity implies that the complete probab...
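A minimal numerical illustration (not from the note itself): a Gaussian AR(1) process with $|\phi| < 1$ is stationary, so summary statistics computed over different time windows should agree. The parameter values below are illustrative assumptions.

```python
import numpy as np

# Simulate x_t = phi * x_{t-1} + w_t with |phi| < 1, w_t ~ N(0, 1).
# Stationarity means different windows share the same mean and variance,
# here 0 and 1 / (1 - phi^2) respectively.
rng = np.random.default_rng(0)
phi, n = 0.5, 200_000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

first, second = x[: n // 2], x[n // 2 :]
print(first.mean(), second.mean())  # both close to 0
print(first.var(), second.var())    # both close to 1 / (1 - phi^2)
```

Repeating the comparison over any other pair of windows gives the same picture, which is exactly what the definition promises.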
SDSC6012 Course 4-Autoregressive models
#sdsc6012 Stationarity. Strict Stationarity: A time series $\{x_t\}$ is strictly stationary if and only if for any $k$, any time points $t_1, t_2, \ldots, t_k$, and any time shift $h$, we have $P\{x_{t_1} \leq c_1, \ldots, x_{t_k} \leq c_k\} = P\{x_{t_1+h} \leq c_1, \ldots, x_{t_k+h} \leq c_k\}$. Core meaning: strict stationarity means the complete probability distribution of the series does not change over time; whichever time window is chosen, its joint distribution is the same. This is what lets statistics computed from a single sample path serve as valid estimates of population properties. Weak Stationarity: A time series {xt...
SDSC6007 Course 4-Markov Decision Processes
#sdsc6007 Elements of Reinforcement Learning. Reinforcement learning comprises five core elements: Agent and environment: the agent executes actions, and the environment returns observations and rewards. Reward signal: a scalar feedback signal indicating how well the agent is doing at time t. Policy: describes the agent's behavior; a mapping from states to actions. Value function: predicts expected future reward (under a particular policy). Model: predicts the environment's behavior/returns. Agent-Environment Interaction: At each time step t, the agent executes action $A_t$ and receives observation $O_t$ and scalar reward $R_t$; the environment receives action $A_t$ and emits observation $O_{t+1}$ and scalar reward $R_{t+1}$. The history is the sequence of observations, actions, and rewards: $H_t = O_1, R_1, A_1, \ldots, A_{t-1}, O_t, R_t$. Definition of State: A state $S_t$ is Markov if and only if it contains all useful information from the history: $P(S_{t+1} \mid S_t) = P(S_{t+1}...
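The interaction loop above can be sketched in a few lines. Everything here is a hypothetical toy (the environment, the parity-based reward, and the random policy are illustrative assumptions, not from the course):

```python
import random

def step(state, action):
    """Toy environment: reward 1.0 when the action matches the state's parity."""
    reward = 1.0 if action == state % 2 else 0.0
    next_state = state + 1
    return next_state, reward

random.seed(0)
state, history = 0, []
for t in range(5):
    action = random.choice([0, 1])           # a uniform random policy
    next_state, reward = step(state, action)  # environment responds
    history.append((state, action, reward))   # the growing history H_t
    state = next_state

print(history)
```

The `history` list is exactly the sequence $H_t$ from the definition; a Markov state is any summary of it that makes the rest of the history redundant for predicting $S_{t+1}$.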
SDSC6015 - Question of Assignment 1
#assignment #sdsc6015 Assignment draft. SDSC6015 - Assignment 1. Problem 1: Jensen's inequality. Let $f$ be convex, $x_1, \ldots, x_m \in \operatorname{dom}(f)$, $\lambda_1, \ldots, \lambda_m \in \mathbb{R}_+$ such that $\sum_{i=1}^{m} \lambda_i = 1$. Show that $f\left( \sum_{i=1}^m \lambda_i x_i \right) \leq \sum_{i=1}^m \lambda_i f(x_i)$. Proof. For $m=2$: let $\lambda_1 + \lambda_2 = 1$...
SDSC6015 - Assignment 1
SDSC6015 - Assignment 1 #assignment #sdsc6015 Problem 1: Jensen's inequality. Let $f$ be convex, $x_1, \ldots, x_m \in \operatorname{dom}(f)$, $\lambda_1, \ldots, \lambda_m \in \mathbb{R}_+$ such that $\sum_{i=1}^{m} \lambda_i = 1$. Show that $f\left( \sum_{i=1}^m \lambda_i x_i \right) \leq \sum_{i=1}^m \lambda_i f(x_i)$. Proof. For $m=2$: Let $\lambda_1 + \lambda_2 = 1$...
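A quick numerical sanity check of the inequality (an illustration, not a substitute for the proof); the choice of $f = \exp$ and the particular weights are assumptions:

```python
import math

# Jensen's inequality: for convex f and nonnegative weights summing to 1,
# f(sum λ_i x_i) <= sum λ_i f(x_i).
f = math.exp                    # exp is convex
x = [0.0, 1.0, 2.0, 4.0]
lam = [0.1, 0.2, 0.3, 0.4]      # nonnegative, sums to 1

lhs = f(sum(l * xi for l, xi in zip(lam, x)))
rhs = sum(l * f(xi) for l, xi in zip(lam, x))
print(lhs <= rhs)  # True
```

Any convex $f$ and any valid weight vector would do; swapping in a concave $f$ (e.g. `math.log` on positive inputs) flips the inequality.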
SDSC6007 - Question of Assignment 1
#assignment #sdsc6007
SDSC5003 - Question of Assignment 1
#assignment #sdsc5003 Original text: NOTE: The university policy on academic dishonesty and plagiarism (cheating) will be taken very seriously in this course. Everything submitted should be your own writing or coding. You must not let other students copy your work. Discussions of the assignment are okay, e.g. understanding the concepts involved. This assignment is an individual one. Upload your work as a single archive file with name A1-XXXX-YYYY.zip where XXXX is your name and YYYY is your student ID. ...
SDSC5003 - Assignment 1
SDSC5003 - Assignment 1 #assignment #sdsc5003 Part I. ER Modelling. Part II: Creating Relational Schemas in SQL for Part I. SQL scripts:
-- Create customer table
CREATE TABLE Customer (
    cid TEXT PRIMARY KEY NOT NULL,
    cname TEXT NOT NULL
);
-- Create company customer table
CREATE TABLE Company (
    cid TEXT PRIMARY KEY NOT NULL,
    street TEXT NOT NULL,
    ci...
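The schema sketch can be sanity-checked against an in-memory SQLite database. The `Company` definition is truncated in the note after `street`, so only the columns visible in the snippet are used here (an assumption), and the sample row is hypothetical:

```python
import sqlite3

# Spin up a throwaway database and run the visible part of the schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
CREATE TABLE Customer (
    cid   TEXT PRIMARY KEY NOT NULL,
    cname TEXT NOT NULL
)""")
cur.execute("""
CREATE TABLE Company (
    cid    TEXT PRIMARY KEY NOT NULL,
    street TEXT NOT NULL
)""")

# Hypothetical sample data, just to exercise the constraints.
cur.execute("INSERT INTO Customer VALUES ('c1', 'Alice')")
cur.execute("SELECT cname FROM Customer WHERE cid = 'c1'")
print(cur.fetchone()[0])  # Alice
conn.close()
```

Running the `CREATE TABLE` statements this way catches syntax errors and constraint typos before the script is submitted against the real database.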
SDSC5002 - Question of Assignment 1
#assignment #sdsc5002
SDSC5001 - Assignment 1
SDSC5001 - Assignment 1 #assignment #sdsc5001 1. For each of parts (a) through (d), indicate whether we would generally expect the performance of a flexible statistical learning method to be better or worse than an inflexible method. Justify your answer. (a) The sample size n is extremely large, and the number of predictors p is small. Better. With a large amount of data, flexible methods have a lower risk of overfitting and can better learn the underlying patterns. (b) The nu...
