SDSC6015 - Question of Assignment 2
#assignment #sdsc6015
Author: Eric_Chen
Copyright: Unless otherwise noted, all posts on this blog are licensed under CC BY-NC-SA 4.0. Please credit 迷麟の小站 when reposting.
Related posts
2025-09-28
SDSC6015 - Assignment 1
SDSC6015 - Assignment 1 #assignment #sdsc6015. Problem 1: Jensen's inequality. Let $f$ be convex, $x_{1},\ldots,x_{m}\in\operatorname{dom}(f)$ and $\lambda_{1},\ldots,\lambda_{m}\in\mathbb{R}_{+}$ such that $\sum_{i=1}^{m}\lambda_{i}=1$. Show that $f\left(\sum_{i=1}^{m}\lambda_{i}x_{i}\right)\leq\sum_{i=1}^{m}\lambda_{i}f(x_{i})$. Proof. For $m=2$: let $\lambda_{1}+\lambda_{2}=$ ...
2025-10-26
SDSC6015 - Assignment 2
#assignment #sdsc6015. Problem link: SDSC6015 - Question of Assignment 2. Problem 1 [10 marks]: Prove that if the function $f:\mathbb{R}^{d}\rightarrow\mathbb{R}$ has a subgradient at every point in its domain, then $f$ is convex. Solution: Let $x,y\in\mathbb{R}^{d}$, $\lambda\in[0,1]$, and define $z=\lambda x+(1-\lambda)y$. Since a subgradient exists at every point, for any $g_{z}\in\partial f(z)$ we have $f(x)\geq f(z)+\langle g_{z},x-z\rangle$, ...
2025-09-29
SDSC6015 - Question of Assignment 1
#assignment #sdsc6015. Assignment draft: SDSC6015 - Assignment 1. Problem 1: Jensen's inequality. Let $f$ be convex, $x_{1},\ldots,x_{m}\in\operatorname{dom}(f)$ and $\lambda_{1},\ldots,\lambda_{m}\in\mathbb{R}_{+}$ such that $\sum_{i=1}^{m}\lambda_{i}=1$. Show that $f\left(\sum_{i=1}^{m}\lambda_{i}x_{i}\right)\leq\sum_{i=1}^{m}\lambda_{i}f(x_{i})$. Proof. For $m=2$: let ...
2025-09-11
SDSC6015 Course 1-Introduction / Preliminaries of Stochastic Optimization
#sdsc6015. Course introduction and preliminaries of stochastic optimization. Main problem: given labeled training data $(x_{1},y_{1}),\dots,(x_{n},y_{n})\in\mathbb{R}^{d}\times\mathcal{Y}$, find weights $\theta$ that minimize $f(\theta)=\frac{1}{n}\sum_{i=1}^{n}\ell(\theta,(x_{i},y_{i}))$, where $n$ is extremely large. ...
2025-09-11
SDSC6015 Course 2-Gradient Descent Method and Subgradient Method
#sdsc6015. Review of convex functions and convex optimization. Definition: a function $f:\mathbb{R}^{d}\to\mathbb{R}$ is convex if and only if its domain $\operatorname{dom}(f)$ is a convex set and, for all $\mathbf{x},\mathbf{y}\in\operatorname{dom}(f)$ and $\lambda\in[0,1]$, it satisfies $f(\lambda\mathbf{x}+(1-\lambda)\mathbf{y})\leq\lambda f(\mathbf{x})+(1-\lambda)f(\mathbf{y})$ ...
2025-09-22
SDSC6015 Course 3-Faster Gradient Descent and Subgradient Descent
#sdsc6015. Review: the general form of a convex optimization problem is $\min_{x\in\mathbb{R}^{d}} f(x)$, where $f$ is a convex function, $\mathbb{R}^{d}$ is a convex set, and $x^{*}=\arg\min_{x\in\mathbb{R}^{d}} f(x)$ is its minimizer. The update rule for gradient descent (GD) is $x_{k+1}=x_{k}-\eta_{k+1}\nabla f(x_{k})$ ... (a minimal numerical sketch of this update follows this list).
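The Course 3 excerpt above quotes the gradient descent update $x_{k+1}=x_{k}-\eta_{k+1}\nabla f(x_{k})$. Below is a minimal Python sketch of that update on a toy least-squares objective; the matrix A, vector b, constant step size, and iteration count are illustrative assumptions and are not taken from the course notes.

```python
import numpy as np

# Minimal sketch of the gradient descent update x_{k+1} = x_k - eta * grad_f(x_k)
# on a toy convex least-squares objective f(x) = 0.5 * ||A x - b||^2.
# A, b, the step size eta, and n_steps are illustrative choices, not from the course post.

def gradient_descent(grad_f, x0, eta=0.01, n_steps=2000):
    """Plain gradient descent with a constant step size."""
    x = x0.copy()
    for _ in range(n_steps):
        x = x - eta * grad_f(x)  # the quoted update rule
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)

x_gd = gradient_descent(grad_f, x0=np.zeros(5))
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)  # closed-form minimizer for comparison
print("suboptimality f(x_gd) - f(x*):", f(x_gd) - f(x_star))
```

With these illustrative settings the printed suboptimality should be close to zero, since the objective is a smooth convex quadratic and the constant step size is small enough for plain gradient descent to converge.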
