• Title/Summary/Keyword: L-polynomial


POLYNOMIAL FACTORIZATION THROUGH Lγ(μ) SPACES

  • Cilia, Raffaella;Gutierrez, Joaquin M.
    • Journal of the Korean Mathematical Society / v.46 no.6 / pp.1293-1307 / 2009
  • We give conditions under which a polynomial can be factored through an $L_{\gamma}(\mu)$ space. Among them, we prove that, given a Banach space $X$ and an index $m$, every absolutely summing operator on $X$ is 1-factorable if and only if every 1-dominated $m$-homogeneous polynomial on $X$ is right 1-factorable, if and only if every 1-dominated $m$-homogeneous polynomial on $X$ is left 1-factorable. As a consequence, if $X$ has local unconditional structure, then every 1-dominated homogeneous polynomial on $X$ is right and left 1-factorable.

Design of Space Search-Optimized Polynomial Neural Networks with the Aid of Ranking Selection and L2-norm Regularization

  • Wang, Dan;Oh, Sung-Kwun;Kim, Eun-Hu
    • Journal of Electrical Engineering and Technology / v.13 no.4 / pp.1724-1731 / 2018
  • The conventional polynomial neural network (PNN) is a classical flexible, self-organizing neural structure, but it is not free from the problem of overfitting. In this study, we propose a space search-optimized polynomial neural network (ssPNN) structure to alleviate this problem. Ranking selection is realized by means of a ranking selection-based performance index (RS_PI), which combines the conventional performance index (PI) with a coefficients-based performance index (CPI), viz. the sum of squared coefficients. Unlike the conventional PNN, L2-norm regularization is also used to estimate the polynomial coefficients when designing the ssPNN. Furthermore, space search optimization (SSO) is exploited to optimize the parameters of the ssPNN, viz. the number of input variables, which variables are selected as inputs, and the type of polynomial. Experimental results show that the proposed ranking selection-based polynomial neural network gives better performance than the neuro-fuzzy models reported in the literature.
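The L2-norm estimation of a node's polynomial coefficients and the sum-of-squared-coefficients term described above can be sketched as follows; the quadratic node form, the value of `lam`, and the function names are illustrative, not taken from the paper:

```python
import numpy as np

def fit_poly_node_l2(x, y, lam=0.1):
    """Fit a quadratic polynomial node y ~ c0 + c1*x + c2*x^2 with
    L2 (ridge) regularization: minimize ||y - Xc||^2 + lam*||c||^2."""
    X = np.column_stack([np.ones_like(x), x, x**2])  # design matrix
    # closed-form ridge solution: c = (X^T X + lam I)^{-1} X^T y
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

def ssc(c):
    """Sum of squared coefficients, the CPI-style complexity term."""
    return float(np.sum(c**2))
```

Increasing `lam` shrinks the coefficients, which is exactly what the SSC term rewards when ranking candidate nodes.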

The Polynomial Numerical Index of Lp(μ)

  • Kim, Sung Guen
    • Kyungpook Mathematical Journal / v.53 no.1 / pp.117-124 / 2013
  • We show that for $1 < p < \infty$ and $k, m \in \mathbb{N}$, $n^{(k)}(l_p) = \inf\{n^{(k)}(l_p^m) : m \in \mathbb{N}\}$, and that for any positive measure $\mu$, $n^{(k)}(L_p(\mu)) \geq n^{(k)}(l_p)$. We also prove that for every $Q \in \mathcal{P}(^k l_p : l_p)$ ($1 < p < \infty$), if $v(Q) = 0$, then $\|Q\| = 0$.
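For readers outside this area, the polynomial numerical index used here is, in the standard notation of the literature (recalled as background, not quoted from the paper):

```latex
n^{(k)}(X) = \inf\bigl\{\, v(P) : P \in \mathcal{P}(^{k}X : X),\ \|P\| = 1 \,\bigr\},
\qquad
v(P) = \sup\bigl\{\, |x^{*}(P(x))| : x \in S_X,\ x^{*} \in S_{X^{*}},\ x^{*}(x) = 1 \,\bigr\}.
```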

An Efficient Rectification Algorithm for Spaceborne SAR Imagery Using Polynomial Model

  • Kim, Man-Jo
    • Korean Journal of Remote Sensing / v.19 no.5 / pp.363-370 / 2003
  • This paper describes a rectification procedure that relies on a polynomial model derived from the imaging geometry without loss of accuracy. Using the polynomial model, one can effectively eliminate the iterative search for the image pixel corresponding to each output grid point. From the imaging geometry and ephemeris data, a geo-location polynomial is constructed from grid points produced by solving three equations simultaneously. In order to correct the local distortions induced by the geometry and the terrain height, a distortion model is incorporated into the procedure as a function of incidence angle and height at each pixel position. With this function, it is straightforward to calculate the pixel displacement due to distortions; pixels are then assigned to the output grid by re-sampling the displaced pixels. Most of the information needed to construct the polynomial model is available in the leader file, and some can be derived from it. For validation, sample images of ERS-1 PRI and Radarsat-1 SGF were processed by the proposed method and evaluated against ground truth acquired from 1:25,000 topographic maps.
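The geo-location polynomial idea can be sketched as follows, assuming a bivariate quadratic form in normalized output-grid coordinates and synthetic control points (in the actual procedure the control points come from the imaging geometry and the leader file):

```python
import numpy as np

def _design(u, v):
    """Bivariate quadratic basis in normalized grid coordinates."""
    return np.column_stack([np.ones_like(u), u, v, u * v, u**2, v**2])

def fit_geolocation_poly(u, v, pix):
    """Least-squares fit of a polynomial mapping output-grid
    coordinates (u, v) to an image pixel coordinate, from control
    points; replaces a per-pixel iterative geometry solve."""
    coef, *_ = np.linalg.lstsq(_design(u, v), pix, rcond=None)
    return coef

def eval_geolocation_poly(coef, u, v):
    """Evaluate the fitted geo-location polynomial at (u, v)."""
    return _design(u, v) @ coef
```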

A RECURSIVE FORMULA FOR THE JONES POLYNOMIAL OF 2-BRIDGE LINKS AND APPLICATIONS

  • Lee, Eun-Ju;Lee, Sang-Youl;Seo, Myoung-Soo
    • Journal of the Korean Mathematical Society / v.46 no.5 / pp.919-947 / 2009
  • In this paper, we give a recursive formula for the Jones polynomial of a 2-bridge knot or link with Conway normal form $C(-2n_1, 2n_2, -2n_3, \ldots, (-1)^r 2n_r)$ in terms of $n_1, n_2, \ldots, n_r$. As applications, we also give a recursive formula for the Jones polynomial of a 3-periodic link $L^{(3)}$ with rational quotient $L = C(2, n_1, -2, n_2, \ldots, n_r, (-1)^r 2)$ for any nonzero integers $n_1, n_2, \ldots, n_r$, and give a formula for the span of the Jones polynomial of $L^{(3)}$ in terms of $n_1, n_2, \ldots, n_r$ with $n_i \neq \pm 1$ for all $i = 1, 2, \ldots, r$.
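As background (standard material, not the paper's specific recursion), recursive computations of this kind are rooted in the defining skein relation of the Jones polynomial:

```latex
t^{-1}\, V(L_{+}) - t\, V(L_{-}) = \bigl(t^{1/2} - t^{-1/2}\bigr)\, V(L_{0}),
\qquad V(\mathrm{unknot}) = 1.
```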

VALUE DISTRIBUTIONS OF L-FUNCTIONS CONCERNING POLYNOMIAL SHARING

  • Mandal, Nintu
    • Communications of the Korean Mathematical Society / v.36 no.4 / pp.729-741 / 2021
  • We mainly study the value distributions of L-functions in the extended Selberg class. Concerning weighted sharing, we prove a uniqueness theorem for the case when a certain differential monomial of a meromorphic function shares a polynomial with a certain differential monomial of an L-function, which improves and generalizes some recent results due to Liu, Li and Yi [11], Hao and Chen [3], and Mandal and Datta [12].

SLOWLY CHANGING FUNCTION ORIENTED GROWTH MEASUREMENT OF DIFFERENTIAL POLYNOMIAL AND DIFFERENTIAL MONOMIAL

  • Biswas, Tanmay
    • Korean Journal of Mathematics / v.27 no.1 / pp.17-51 / 2019
  • In this paper we establish some new results on the comparative growth properties of composite entire and meromorphic functions, using the relative $_pL^*$-order, the relative $_pL^*$-lower order, and differential monomials and differential polynomials generated by one of the factors.

Fitting a Piecewise-quadratic Polynomial Curve to Points in the Plane (평면상의 점들에 대한 조각적 이차 다항식 곡선 맞추기)

  • Kim, Jae-Hoon
    • Journal of KIISE: Computer Systems and Theory / v.36 no.1 / pp.21-25 / 2009
  • In this paper, we study the problem of fitting a piecewise-quadratic polynomial curve to points in the plane. The curve consists of quadratic polynomial segments, each connecting two of the given points. The curve passes through only a subset of the points; for the points not passed through, the error between the curve and the points is measured in the $L^{\infty}$ metric. We consider two optimization problems. One is to minimize the number of segments of the curve for a given allowed error; the other is to minimize the error between the curve and the points subject to the curve having at most a given number of segments. For $n$ given points, we propose an $O(n^2)$ algorithm for the former problem and an $O(n^3)$ algorithm for the latter.
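The first optimization problem can be sketched with a simple greedy heuristic (not the paper's exact $O(n^2)$ algorithm, and using a least-squares fit as a conservative stand-in for the optimal $L^{\infty}$ fit):

```python
import numpy as np

def linf_quadratic_error(xs, ys):
    """Max |residual| of a least-squares quadratic fit, used here as a
    stand-in for the optimal L-infinity fitting error."""
    if len(xs) <= 3:
        return 0.0  # a quadratic interpolates up to three points exactly
    c = np.polyfit(xs, ys, 2)
    return float(np.max(np.abs(np.polyval(c, xs) - ys)))

def segment_points(xs, ys, eps):
    """Greedily cover the points with quadratic segments, each fitting
    its points within error eps; returns (start, end) index pairs."""
    segs, i, n = [], 0, len(xs)
    while i < n:
        j = i
        # extend the current segment while the next point still fits
        while j + 1 < n and linf_quadratic_error(xs[i:j + 2], ys[i:j + 2]) <= eps:
            j += 1
        segs.append((i, j))
        i = j + 1
    return segs
```

The greedy pass gives an upper bound on the segment count; the paper's algorithm solves the problem exactly.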

A Study on Polynomial Neural Networks for Stabilized Deep Networks Structure (안정화된 딥 네트워크 구조를 위한 다항식 신경회로망의 연구)

  • Jeon, Pil-Han;Kim, Eun-Hu;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.12 / pp.1772-1781 / 2017
  • In this study, a design methodology for alleviating the overfitting problem of Polynomial Neural Networks (PNN) is realized with the aid of two techniques: L2 regularization and the Sum of Squared Coefficients (SSC). The PNN is widely used as a mathematical modeling method, for example for the identification of linear systems from input/output data and for regression-based prediction. PNN is an algorithm that obtains a preferred network structure by generating consecutive layers and nodes using multivariate polynomial subexpressions. It has far fewer nodes and more flexible adaptability than existing neural network algorithms. However, such algorithms suffer from overfitting due to noise sensitivity as well as excessive training during the generation of successive network layers. To alleviate this overfitting problem and to design the ensuing deep network structure effectively, two techniques are introduced: SSC and $L_2$ regularization are used in the consecutive generation of each layer's nodes, as well as of each layer, to construct the deep PNN structure. $L_2$ regularization performs minimum-coefficient estimation by adding a penalty term to the cost function; it is a representative method for reducing the influence of noise by flattening the solution space and shrinking coefficient size. The SSC technique minimizes the sum of squared polynomial coefficients instead of the square of the errors alone. As a result, the overfitting problem of the deep PNN structure is stabilized by the proposed method. This study demonstrates the possibility of deep network structure design as well as big data processing, and the superiority of the network performance is shown through experiments.
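The layer-wise node generation can be sketched as ranking candidate input-variable pairs by a performance index augmented with an SSC penalty; the two-input quadratic node form, the weighting constants, and all names below are illustrative, not the authors' implementation:

```python
import numpy as np
from itertools import combinations

def ridge_fit(X, y, lam):
    """Closed-form L2-regularized (ridge) coefficient estimate."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

def node_score(x1, x2, y, lam=0.01, alpha=0.1):
    """Penalized index for one candidate node: mean squared error (PI)
    plus alpha times the sum of squared coefficients (SSC)."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    c = ridge_fit(X, y, lam)
    pi = float(np.mean((X @ c - y) ** 2))
    return pi + alpha * float(np.sum(c**2))

def select_best_pair(data, y, **kw):
    """Rank all input-variable pairs; return the lowest-scoring one."""
    return min(combinations(range(data.shape[1]), 2),
               key=lambda ij: node_score(data[:, ij[0]], data[:, ij[1]], y, **kw))
```

The SSC term steers selection toward nodes that explain the data with small coefficients, which is the stabilizing effect the abstract describes.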