
Mathematics » "Linear Algebra - Theorems and Applications", book edited by Hassan Abid Yasser, ISBN 978-953-51-0669-2, Published: July 11, 2012 under CC BY 3.0 license. © The Author(s).

# Recent Research on Jensen's Inequality for Operators

By Jadranka Mićić and Josip Pečarić
DOI: 10.5772/48468


## 1. Introduction

The self-adjoint operators on Hilbert spaces, with their numerous applications, play an important part in operator theory. The study of bounds for self-adjoint operators is a very useful area of this theory, and no inequality serves such bound estimates better than Jensen's inequality, which is used extensively in various fields of mathematics.

Let $I$ be a real interval of any type. A continuous function $f:I\to ℝ$ is said to be operator convex if

 $f\left(\lambda x+\left(1-\lambda \right)y\right)\le \lambda f\left(x\right)+\left(1-\lambda \right)f\left(y\right)$ ()

holds for each $\lambda \in \left[0,1\right]$ and every pair of self-adjoint operators $x$ and $y$ (acting) on an infinite dimensional Hilbert space $H$ with spectra in $I$ (the ordering is defined by setting $x\le y$ if $y-x$ is positive semi-definite).
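As a numerical illustration (not part of the original argument), the defining inequality can be tested for $f\left(t\right)={t}^{2}$, which is operator convex on all of $ℝ$. The sketch below uses NumPy and hypothetical random Hermitian matrices, and checks that the difference of the two sides is positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_hermitian(n):
    # A random Hermitian matrix (self-adjoint operator on C^n)
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2

def is_psd(h, tol=1e-9):
    # h >= 0 in the operator order iff all eigenvalues of h are >= 0
    return np.linalg.eigvalsh(h).min() >= -tol

n, lam = 4, 0.3
x, y = rand_hermitian(n), rand_hermitian(n)
f = lambda h: h @ h              # f(t) = t^2, operator convex

lhs = f(lam * x + (1 - lam) * y)
rhs = lam * f(x) + (1 - lam) * f(y)
print(is_psd(rhs - lhs))         # operator convexity: lhs <= rhs
```

For $f\left(t\right)={t}^{2}$ the difference is exactly $\lambda \left(1-\lambda \right){\left(x-y\right)}^{2}\ge 0$, so the check succeeds for every choice of $x,y$.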

Let $f$ be an operator convex function defined on an interval $I.$ Ch. Davis [1] proved a Schwarz inequality (there is a small typo in the proof: Davis states that, by Stinespring's theorem, $\phi$ can be written in the form $\phi \left(x\right)=P\rho \left(x\right)P$, where $\rho$ is a $*$-homomorphism to $B\left(H\right)$ and $P$ is a projection on $H.$ In fact, $H$ may be embedded in a Hilbert space $K$ on which $\rho$ and $P$ act. The theorem then follows from the calculation $f\left(\phi \left(x\right)\right)=f\left(P\rho \left(x\right)P\right)\le Pf\left(\rho \left(x\right)\right)P=P\rho \left(f\left(x\right)\right)P=\phi \left(f\left(x\right)\right),$ where the pinching inequality, proved by Davis in the same paper, is applied):

 $f\left(\phi \left(x\right)\right)\le \phi \left(f\left(x\right)\right)$ ()

where $\phi :𝒜\to B\left(K\right)$ is a unital completely positive linear mapping from a ${C}^{*}$-algebra $𝒜$ to linear operators on a Hilbert space $K,$ and $x$ is a self-adjoint element in $𝒜$ with spectrum in $I.$ Subsequently M. D. Choi [2] noted that it is enough to assume that $\phi$ is unital and positive. In fact, the restriction of $\phi$ to the commutative ${C}^{*}$-algebra generated by $x$ is automatically completely positive by a theorem of Stinespring.

F. Hansen and G. K. Pedersen [3] proved a Jensen type inequality

 $f\left(\sum _{i=1}^{n}{a}_{i}^{*}{x}_{i}{a}_{i}\right)\le \sum _{i=1}^{n}{a}_{i}^{*}f\left({x}_{i}\right){a}_{i}$ ()

for operator convex functions $f$ defined on an interval $I=\left[0,\alpha \right)$ (with $\alpha \le \infty$ and $f\left(0\right)\le 0$) and self-adjoint operators ${x}_{1},\cdots ,{x}_{n}$ with spectra in $I$ assuming that ${\sum }_{i=1}^{n}{a}_{i}^{*}{a}_{i}=\mathbf{1}.$ The restriction on the interval and the requirement $f\left(0\right)\le 0$ were subsequently removed by B. Mond and J. Pečarić in [4], cf. also [5].
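The Hansen–Pedersen inequality is easy to check numerically. The sketch below (hypothetical data; $f\left(t\right)={t}^{2}$ is operator convex with $f\left(0\right)=0$) builds ${a}_{1},{a}_{2}$ with ${a}_{1}^{*}{a}_{1}+{a}_{2}^{*}{a}_{2}=\mathbf{1}$ by normalizing arbitrary real matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Arbitrary A_i, normalized so that a_1^* a_1 + a_2^* a_2 = 1
A1 = rng.standard_normal((n, n))
A2 = rng.standard_normal((n, n))
S = A1.T @ A1 + A2.T @ A2
w, V = np.linalg.eigh(S)                  # S is positive definite a.s.
S_inv_half = V @ np.diag(w ** -0.5) @ V.T # S^{-1/2}
a1, a2 = A1 @ S_inv_half, A2 @ S_inv_half

x1 = np.diag([0.5, 1.0, 2.0])             # spectra in [0, alpha)
x2 = np.diag([0.1, 0.3, 3.0])

f = lambda h: h @ h                       # f(t) = t^2, operator convex, f(0) = 0
lhs = f(a1.T @ x1 @ a1 + a2.T @ x2 @ a2)
rhs = a1.T @ f(x1) @ a1 + a2.T @ f(x2) @ a2
print(np.linalg.eigvalsh(rhs - lhs).min() >= -1e-9)   # lhs <= rhs
```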

The inequality () is in fact just a reformulation of () although this was not noticed at the time. It is nevertheless important to note that the proof given in [3] and thus the statement of the theorem, when restricted to $n×n$ matrices, holds for the much richer class of $2n×2n$ matrix convex functions. Hansen and Pedersen used () to obtain elementary operations on functions, which leave invariant the class of operator monotone functions. These results then served as the basis for a new proof of Löwner's theorem applying convexity theory and Krein-Milman's theorem.

B. Mond and J. Pečarić [6] proved the inequality

 $f\left(\sum _{i=1}^{n}{w}_{i}{\phi }_{i}\left({x}_{i}\right)\right)\le \sum _{i=1}^{n}{w}_{i}{\phi }_{i}\left(f\left({x}_{i}\right)\right)$ ()

for operator convex functions $f$ defined on an interval $I,$ where ${\phi }_{i}:B\left(H\right)\to B\left(K\right)$ are unital positive linear mappings, ${x}_{1},\cdots ,{x}_{n}$ are self-adjoint operators with spectra in $I$ and ${w}_{1},\cdots ,{w}_{n}$ are non-negative real numbers with sum one.
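A standard way to produce unital positive linear mappings for experiments is compression by an isometry, $x↦{V}^{*}xV$ with ${V}^{*}V={1}_{K}$. The following sketch (hypothetical data) tests the Mond–Pečarić inequality for $f\left(t\right)={t}^{2}$ with two such maps and weights summing to one:

```python
import numpy as np

rng = np.random.default_rng(2)
nH, nK = 5, 3

def isometry(nH, nK):
    # V: K -> H with V^* V = 1_K, so x |-> V^* x V is unital and positive
    q, _ = np.linalg.qr(rng.standard_normal((nH, nK)))
    return q

def rand_hermitian(n):
    a = rng.standard_normal((n, n))
    return (a + a.T) / 2

V1, V2 = isometry(nH, nK), isometry(nH, nK)
phi1 = lambda x: V1.T @ x @ V1
phi2 = lambda x: V2.T @ x @ V2
w1, w2 = 0.4, 0.6                 # non-negative weights with sum one

x1, x2 = rand_hermitian(nH), rand_hermitian(nH)
f = lambda h: h @ h               # operator convex f(t) = t^2

lhs = f(w1 * phi1(x1) + w2 * phi2(x2))
rhs = w1 * phi1(f(x1)) + w2 * phi2(f(x2))
print(np.linalg.eigvalsh(rhs - lhs).min() >= -1e-9)   # lhs <= rhs
```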

Also, B. Mond, J. Pečarić, T. Furuta et al. [6], [7], [8], [9], [10], [11] studied converses of some special cases of Jensen's inequality. Thus in [10] the following generalized converse of the Schwarz inequality () is presented:

 $F\left[\phi \left(f\left(A\right)\right),g\left(\phi \left(A\right)\right)\right]\le \underset{m\le t\le M}{max}F\left[f\left(m\right)+\frac{f\left(M\right)-f\left(m\right)}{M-m}\left(t-m\right),g\left(t\right)\right]{1}_{\stackrel{˜}{n}}$ ()

for convex functions $f$ defined on an interval $\left[m,M\right]$, $m<M$, where $g$ is a real valued continuous function on $\left[m,M\right]$, $F\left(u,v\right)$ is a real valued function defined on $U×V$, matrix non-decreasing in $u$, $U\supset f\left[m,M\right]$, $V\supset g\left[m,M\right]$, $\phi :{H}_{n}\to {H}_{\stackrel{˜}{n}}$ is a unital positive linear mapping and $A$ is a Hermitian matrix with spectrum contained in $\left[m,M\right]$.

There is much new research on the classical Jensen inequality () and its reverse inequalities. For example, J. I. Fujii et al. in [12], [13] expressed these inequalities in terms of externally dividing points.

## 2. Classic results

In this section we present a form of Jensen's inequality which contains (), () and () as special cases. Since the inequality in () was the motivating step for obtaining converses of Jensen's inequality using the so-called Mond-Pečarić method, we also give some results pertaining to converse inequalities in the new formulation.

We recall some definitions. Let $T$ be a locally compact Hausdorff space and let $𝒜$ be a ${C}^{*}$-algebra of operators on some Hilbert space $H.$ We say that a field ${\left({x}_{t}\right)}_{t\in T}$ of operators in $𝒜$ is continuous if the function $t↦{x}_{t}$ is norm continuous on $T.$ If in addition $\mu$ is a Radon measure on $T$ and the function $t↦\parallel {x}_{t}\parallel$ is integrable, then we can form the Bochner integral ${\int }_{T}{x}_{t}\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)$, which is the unique element in $𝒜$ such that

 $\varphi \left({\int }_{T}{x}_{t}\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\right)={\int }_{T}\varphi \left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)$ ()

for every linear functional $\varphi$ in the norm dual ${𝒜}^{*}$.

Assume furthermore that there is a field ${\left({\phi }_{t}\right)}_{t\in T}$ of positive linear mappings ${\phi }_{t}:𝒜\to ℬ$ from $𝒜$ to another ${C}^{*}$-algebra $ℬ$ of operators on a Hilbert space $K$. We recall that a linear mapping ${\phi }_{t}:𝒜\to ℬ$ is said to be positive if ${\phi }_{t}\left(x\right)\ge 0$ for all $x\ge 0$. We say that such a field is continuous if the function $t↦{\phi }_{t}\left(x\right)$ is continuous for every $x\in 𝒜.$ Let the ${C}^{*}$-algebras include the identity operators and let the function $t↦{\phi }_{t}\left({1}_{H}\right)$ be integrable with ${\int }_{T}{\phi }_{t}\left({1}_{H}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)=k{1}_{K}$ for some positive scalar $k$. In particular, if ${\int }_{T}{\phi }_{t}\left({1}_{H}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)={1}_{K},$ we say that the field ${\left({\phi }_{t}\right)}_{t\in T}$ is unital.

Let $B\left(H\right)$ be the ${C}^{*}$-algebra of all bounded linear operators on a Hilbert space $H$. We define bounds of an operator $x\in B\left(H\right)$ by

 ${m}_{x}=\underset{\parallel \xi \parallel =1}{inf}〈x\xi ,\xi 〉\phantom{\rule{1.em}{0ex}}\text{and}\phantom{\rule{1.em}{0ex}}{M}_{x}=\underset{\parallel \xi \parallel =1}{sup}〈x\xi ,\xi 〉$ ()

for $\xi \in H$. If $\mathrm{𝖲𝗉}\left(x\right)$ denotes the spectrum of $x$, then $\mathrm{𝖲𝗉}\left(x\right)\subseteq \left[{m}_{x},{M}_{x}\right]$.
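In the matrix case these bounds are simply the extreme eigenvalues: the infimum and supremum of the quadratic form $〈x\xi ,\xi 〉$ over unit vectors are the smallest and largest eigenvalue of $x$. A minimal sketch with an illustrative $2\times 2$ matrix:

```python
import numpy as np

# For a Hermitian matrix, m_x = inf <x xi, xi> and M_x = sup <x xi, xi>
# over unit vectors xi are the smallest and largest eigenvalues.
x = np.array([[2.0, 1.0],
              [1.0, 0.0]])
eig = np.linalg.eigvalsh(x)       # ascending order
m_x, M_x = eig[0], eig[-1]
print(m_x, M_x)                   # 1 - sqrt(2), 1 + sqrt(2)
```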

For an operator $x\in B\left(H\right)$ we define operators $|x|$, ${x}^{+}$, ${x}^{-}$ by

 $|x|={\left({x}^{*}x\right)}^{1/2},\phantom{\rule{2.em}{0ex}}{x}^{+}=\left(|x|+x\right)/2,\phantom{\rule{2.em}{0ex}}{x}^{-}=\left(|x|-x\right)/2$ ()

Obviously, if $x$ is self-adjoint, then $|x|={\left({x}^{2}\right)}^{1/2}$ and ${x}^{+},{x}^{-}\ge 0$ (called positive and negative parts of $x={x}^{+}-{x}^{-}$).
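These decompositions are easy to compute via the spectral theorem; the sketch below (illustrative self-adjoint matrix) verifies $x={x}^{+}-{x}^{-}$, $|x|={x}^{+}+{x}^{-}$ and the positivity of both parts:

```python
import numpy as np

x = np.array([[1.0, 2.0],
              [2.0, -1.0]])                # self-adjoint

w, V = np.linalg.eigh(x)
abs_x = V @ np.diag(np.abs(w)) @ V.T       # |x| = (x^2)^{1/2}
x_plus = (abs_x + x) / 2                   # positive part
x_minus = (abs_x - x) / 2                  # negative part

print(np.allclose(x, x_plus - x_minus))
print(np.linalg.eigvalsh(x_plus).min() >= -1e-9)   # x^+ >= 0
print(np.linalg.eigvalsh(x_minus).min() >= -1e-9)  # x^- >= 0
```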

### 2.1. Jensen's inequality with operator convexity

Firstly, we give a general formulation of Jensen's operator inequality for a unital field of positive linear mappings (see [14]).

Theorem 1 Let $f:I\to ℝ$ be an operator convex function defined on an interval $I$ and let $𝒜$ and $ℬ$ be unital ${C}^{*}$-algebras acting on a Hilbert space $H$ and $K$ respectively. If ${\left({\phi }_{t}\right)}_{t\in T}$ is a unital field of positive linear mappings ${\phi }_{t}:𝒜\to ℬ$ defined on a locally compact Hausdorff space $T$ with a bounded Radon measure $\mu ,$ then the inequality

 $f\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\right)\le {\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)$ ()

holds for every bounded continuous field ${\left({x}_{t}\right)}_{t\in T}$ of self-adjoint elements in $𝒜$ with spectra contained in $I.$

We first note that the function $t↦{\phi }_{t}\left({x}_{t}\right)\in ℬ$ is continuous and bounded, hence integrable with respect to the bounded Radon measure $\mu .$ Furthermore, the integral is an element in the multiplier algebra $M\left(ℬ\right)$ acting on $K.$ We may organize the set $CB\left(T,𝒜\right)$ of bounded continuous functions on $T$ with values in $𝒜$ as a normed involutive algebra by applying the point-wise operations and setting

 $\parallel {\left({y}_{t}\right)}_{t\in T}\parallel =\underset{t\in T}{sup}\parallel {y}_{t}\parallel \phantom{\rule{2.em}{0ex}}{\left({y}_{t}\right)}_{t\in T}\in CB\left(T,𝒜\right)$ ()

and it is not difficult to verify that the norm is already complete and satisfies the ${C}^{*}$-identity. In fact, this is a standard construction in ${C}^{*}$-algebra theory. It follows that $f\left({\left({x}_{t}\right)}_{t\in T}\right)={\left(f\left({x}_{t}\right)\right)}_{t\in T}$. We then consider the mapping

 $\pi :CB\left(T,𝒜\right)\to M\left(ℬ\right)\subseteq B\left(K\right)$ ()

defined by setting

 $\pi \left({\left({x}_{t}\right)}_{t\in T}\right)={\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)$ ()

and note that it is a unital positive linear map. Setting $x={\left({x}_{t}\right)}_{t\in T}\in CB\left(T,𝒜\right),$ we use inequality () to obtain

 $f\left(\pi \left({\left({x}_{t}\right)}_{t\in T}\right)\right)=f\left(\pi \left(x\right)\right)\le \pi \left(f\left(x\right)\right)=\pi \left(f\left({\left({x}_{t}\right)}_{t\in T}\right)\right)=\pi \left({\left(f\left({x}_{t}\right)\right)}_{t\in T}\right)$ ()

but this is just the statement of the theorem.

### 2.2. Converses of Jensen's inequality

In the present context we may obtain results of the Li-Mathias type, cf. Chapter 3 of [15] and [16], [17].

Theorem 2 Let $T$ be a locally compact Hausdorff space equipped with a bounded Radon measure $\mu$. Let ${\left({x}_{t}\right)}_{t\in T}$ be a bounded continuous field of self-adjoint elements in a unital ${C}^{*}$-algebra $𝒜$ with spectra in $\left[m,M\right]$, $m<M$. Furthermore, let ${\left({\phi }_{t}\right)}_{t\in T}$ be a field of positive linear mappings ${\phi }_{t}:𝒜\to ℬ$ from $𝒜$ to another unital ${C}^{*}$-algebra $ℬ$, such that the function $t↦{\phi }_{t}\left({1}_{H}\right)$ is integrable with ${\int }_{T}{\phi }_{t}\left({1}_{H}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)=k{1}_{K}$ for some positive scalar $k$. Let ${m}_{x}$ and ${M}_{x}$, ${m}_{x}\le {M}_{x}$, be the bounds of the self-adjoint operator $x={\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)$, and let $f:\left[m,M\right]\to ℝ$, $g:\left[{m}_{x},{M}_{x}\right]\to ℝ$, $F:U×V\to ℝ$ be functions such that $\left(kf\right)\left(\left[m,M\right]\right)\subset U,$ $g\left(\left[{m}_{x},{M}_{x}\right]\right)\subset V$ and $F$ is bounded. If $F$ is operator monotone in the first variable, then

 $\begin{array}{c}\underset{{m}_{x}\le z\le {M}_{x}}{inf}F\left[k·{h}_{1}\left(\frac{1}{k}z\right),g\left(z\right)\right]{1}_{K}\le F\left[{\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)d\mu \left(t\right),g\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)d\mu \left(t\right)\right)\right]\\ \le \underset{{m}_{x}\le z\le {M}_{x}}{sup}F\left[k·{h}_{2}\left(\frac{1}{k}z\right),g\left(z\right)\right]{1}_{K}\end{array}$ ()

holds for every operator convex function ${h}_{1}$ on $\left[m,M\right]$ such that ${h}_{1}\le f$ and for every operator concave function ${h}_{2}$ on $\left[m,M\right]$ such that ${h}_{2}\ge f$.

We prove only the RHS of (). Let ${h}_{2}$ be an operator concave function on $\left[m,M\right]$ such that $f\left(z\right)\le {h}_{2}\left(z\right)$ for every $z\in \left[m,M\right]$. By using the functional calculus, it follows that $f\left({x}_{t}\right)\le {h}_{2}\left({x}_{t}\right)$ for every $t\in T$. Applying the positive linear mappings ${\phi }_{t}$ and integrating, we obtain

 ${\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)d\mu \left(t\right)\le {\int }_{T}{\phi }_{t}\left({h}_{2}\left({x}_{t}\right)\right)d\mu \left(t\right)$ ()

Furthermore, replacing ${\phi }_{t}$ by $\frac{1}{k}{\phi }_{t}$ in Theorem , we obtain $\frac{1}{k}{\int }_{T}{\phi }_{t}\left({h}_{2}\left({x}_{t}\right)\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\le {h}_{2}\left(\frac{1}{k}{\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\right)$, which gives ${\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)d\mu \left(t\right)\le k·{h}_{2}\left(\frac{1}{k}{\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\right)$. Since ${m}_{x}\phantom{\rule{0.166667em}{0ex}}{1}_{K}\le {\int }_{T}{\phi }_{t}\left({x}_{t}\right)d\mu \left(t\right)\le {M}_{x}\phantom{\rule{0.166667em}{0ex}}{1}_{K}$, then using operator monotonicity of $F\left(·,v\right)$ we obtain

 $\begin{array}{cc}& F\left[{\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)d\mu \left(t\right),g\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)d\mu \left(t\right)\right)\right]\\ & \le F\left[k·{h}_{2}\left(\frac{1}{k}{\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\right),g\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)d\mu \left(t\right)\right)\right]\le \underset{{m}_{x}\le z\le {M}_{x}}{sup}F\left[k·{h}_{2}\left(\frac{1}{k}z\right),g\left(z\right)\right]{1}_{K}\end{array}$ ()

Applying RHS of () for a convex function $f$ (or LHS of () for a concave function $f$) we obtain the following generalization of ().

Theorem 3 Let ${\left({x}_{t}\right)}_{t\in T}$, ${m}_{x}$, ${M}_{x}$ and ${\left({\phi }_{t}\right)}_{t\in T}$ be as in Theorem . Let $f:\left[m,M\right]\to ℝ$, $g:\left[{m}_{x},{M}_{x}\right]\to ℝ$, $F:U×V\to ℝ$ be functions such that $\left(kf\right)\left(\left[m,M\right]\right)\subset U,$ $g\left(\left[{m}_{x},{M}_{x}\right]\right)\subset V$ and $F$ is bounded. If $F$ is operator monotone in the first variable and $f$ is convex on the interval $\left[m,M\right]$, then

 $\begin{array}{c}F\left[{\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)d\mu \left(t\right),g\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)d\mu \left(t\right)\right)\right]\\ \le \underset{{m}_{x}\le z\le {M}_{x}}{sup}F\left[\frac{Mk-z}{M-m}f\left(m\right)+\frac{z-km}{M-m}f\left(M\right),g\left(z\right)\right]{1}_{K}\end{array}$ ()

In the dual case (when $f$ is concave) the opposite inequalities hold in () with $inf$ instead of $sup$.

We prove only the convex case. For convex $f$ the inequality $f\left(z\right)\le \frac{M-z}{M-m}f\left(m\right)+\frac{z-m}{M-m}f\left(M\right)$ holds for every $z\in \left[m,M\right]$. Thus, by putting ${h}_{2}\left(z\right)=\frac{M-z}{M-m}f\left(m\right)+\frac{z-m}{M-m}f\left(M\right)$ in () we obtain ().

Numerous applications of the previous theorem can be given (see [15]). Applying Theorem  for the function $F\left(u,v\right)=u-\alpha v$ and $k=1$, we obtain the following generalization of Theorem 2.4 of [15].

Corollary 4 Let ${\left({x}_{t}\right)}_{t\in T}$, ${m}_{x}$, ${M}_{x}$ be as in Theorem  and ${\left({\phi }_{t}\right)}_{t\in T}$ be a unital field of positive linear mappings ${\phi }_{t}:𝒜\to ℬ$. If $f:\left[m,M\right]\to ℝ$ is convex on the interval $\left[m,M\right]$, $m<M$, and $g:\left[m,M\right]\to ℝ$, then for any $\alpha \in ℝ$

 ${\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)d\mu \left(t\right)\le \alpha \phantom{\rule{0.277778em}{0ex}}g\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)d\mu \left(t\right)\right)+C{1}_{K}$ ()

where

 $\begin{array}{ccc}\hfill C& =& \underset{{m}_{x}\le z\le {M}_{x}}{max}\left\{\frac{M-z}{M-m}f\left(m\right)+\frac{z-m}{M-m}f\left(M\right)-\alpha g\left(z\right)\right\}\hfill \\ & \le & \underset{m\le z\le M}{max}\left\{\frac{M-z}{M-m}f\left(m\right)+\frac{z-m}{M-m}f\left(M\right)-\alpha g\left(z\right)\right\}\hfill \end{array}$ ()

If furthermore $\alpha g$ is strictly convex and differentiable, then the constant $C\equiv C\left(m,M,f,g,\alpha \right)$ can be written more precisely as

 $C=\frac{M-{z}_{0}}{M-m}f\left(m\right)+\frac{{z}_{0}-m}{M-m}f\left(M\right)-\alpha g\left({z}_{0}\right)$ ()

where

 ${z}_{0}=\left\{\begin{array}{cc}{\left({g}^{\prime }\right)}^{-1}\left(\frac{f\left(M\right)-f\left(m\right)}{\alpha \left(M-m\right)}\right)\hfill & \phantom{\rule{1.em}{0ex}}\text{if}\phantom{\rule{0.277778em}{0ex}}\alpha {g}^{\prime }\left({m}_{x}\right)\le \frac{f\left(M\right)-f\left(m\right)}{M-m}\le \alpha {g}^{\prime }\left({M}_{x}\right)\hfill \\ {m}_{x}\hfill & \phantom{\rule{1.em}{0ex}}\text{if}\phantom{\rule{0.277778em}{0ex}}\alpha {g}^{\prime }\left({m}_{x}\right)\ge \frac{f\left(M\right)-f\left(m\right)}{M-m}\hfill \\ {M}_{x}\hfill & \phantom{\rule{1.em}{0ex}}\text{if}\phantom{\rule{0.277778em}{0ex}}\alpha {g}^{\prime }\left({M}_{x}\right)\le \frac{f\left(M\right)-f\left(m\right)}{M-m}\hfill \end{array}\right.$ ()

In the dual case (when $f$ is concave and $\alpha g$ is strictly concave and differentiable) the opposite inequalities hold in () with $min$ instead of $max$, with the opposite conditions determining ${z}_{0}$.
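The closed form for ${z}_{0}$ can be sanity-checked against a brute-force maximization. The sketch below uses illustrative (hypothetical) choices $f\left(z\right)={z}^{2}$, $g\left(z\right)={z}^{2}$, $\alpha =1$, $\left[m,M\right]=\left[0,2\right]$, $\left[{m}_{x},{M}_{x}\right]=\left[0.2,1.8\right]$; since the objective is strictly concave when $\alpha g$ is strictly convex, clamping the stationary point to $\left[{m}_{x},{M}_{x}\right]$ reproduces the three cases at once:

```python
import numpy as np

# Hypothetical concrete data for Corollary 4 (k = 1):
m, M = 0.0, 2.0            # spectra bounds of the field (x_t)
mx, Mx = 0.2, 1.8          # bounds of the integrated operator x
alpha = 1.0
f = lambda z: z ** 2                    # convex on [m, M]
g = lambda z: z ** 2                    # alpha*g strictly convex, differentiable
g_prime_inv = lambda w: w / 2.0         # inverse of g'(z) = 2z

slope = (f(M) - f(m)) / (M - m)
z0 = np.clip(g_prime_inv(slope / alpha), mx, Mx)  # the three cases in one clip
C = (M - z0) / (M - m) * f(m) + (z0 - m) / (M - m) * f(M) - alpha * g(z0)

# Brute-force check of the max that defines C
zs = np.linspace(mx, Mx, 100001)
C_grid = ((M - zs) / (M - m) * f(m) + (zs - m) / (M - m) * f(M) - alpha * g(zs)).max()
print(round(C, 6), round(C_grid, 6))    # both approx 1.0
```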

## 3. Inequalities with conditions on spectra

In this section we present Jensen's operator inequality for real valued continuous convex functions with conditions on the spectra of the operators. A discrete version of this result is given in [18]. Also, we obtain generalized converses of Jensen's inequality under the same conditions.

Operator convexity plays an essential role in (). In fact, the inequality () will be false if we replace an operator convex function by a general convex function. For example, M.D. Choi in Remark 2.6 of [2] considered the function $f\left(t\right)={t}^{4}$, which is convex but not operator convex. He demonstrated that it is sufficient to take $\mathrm{dim}H=3$, so we have the matrix case as follows. Let $\Phi :{M}_{3}\left(ℂ\right)\to {M}_{2}\left(ℂ\right)$ be the contraction mapping $\Phi \left({\left({a}_{ij}\right)}_{1\le i,j\le 3}\right)={\left({a}_{ij}\right)}_{1\le i,j\le 2}$. If $A=\left(\begin{array}{ccc}1& 0& 1\\ 0& 0& 1\\ 1& 1& 1\end{array}\right),$ then $\Phi {\left(A\right)}^{4}=\left(\begin{array}{cc}1& 0\\ 0& 0\end{array}\right)\nleq \left(\begin{array}{cc}9& 5\\ 5& 3\end{array}\right)=\Phi \left({A}^{4}\right)$, and there is no relation between $\Phi {\left(A\right)}^{4}$ and $\Phi \left({A}^{4}\right)$ under the operator order.
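Choi's counterexample is easy to reproduce numerically; the following sketch recomputes $\Phi {\left(A\right)}^{4}$ and $\Phi \left({A}^{4}\right)$ and checks that neither difference is positive semi-definite:

```python
import numpy as np

A = np.array([[1, 0, 1],
              [0, 0, 1],
              [1, 1, 1]], dtype=float)
Phi = lambda B: B[:2, :2]                  # compression to the upper-left 2x2 corner

lhs = np.linalg.matrix_power(Phi(A), 4)    # Phi(A)^4
rhs = Phi(np.linalg.matrix_power(A, 4))    # Phi(A^4)
print(lhs)                                 # [[1, 0], [0, 0]]
print(rhs)                                 # [[9, 5], [5, 3]]
# Neither difference is PSD: no operator-order relation between lhs and rhs
print(np.linalg.eigvalsh(rhs - lhs).min())  # negative
print(np.linalg.eigvalsh(lhs - rhs).min())  # negative
```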

Example 5 It appears that the inequality () will be false if we replace the operator convex function by a general convex function. We give a small example for the matrix case and $T=\left\{1,2\right\}$. We define mappings ${\Phi }_{1},{\Phi }_{2}:{M}_{3}\left(ℂ\right)\to {M}_{2}\left(ℂ\right)$ by ${\Phi }_{1}\left({\left({a}_{ij}\right)}_{1\le i,j\le 3}\right)=\frac{1}{2}{\left({a}_{ij}\right)}_{1\le i,j\le 2}$, ${\Phi }_{2}={\Phi }_{1}$. Then ${\Phi }_{1}\left({I}_{3}\right)+{\Phi }_{2}\left({I}_{3}\right)={I}_{2}$.

##### I)
• If

 ${X}_{1}=2\left(\begin{array}{ccc}1& 0& 1\\ 0& 0& 1\\ 1& 1& 1\end{array}\right)\phantom{\rule{1.em}{0ex}}\text{and}\phantom{\rule{1.em}{0ex}}{X}_{2}=2\left(\begin{array}{ccc}1& 0& 0\\ 0& 0& 0\\ 0& 0& 0\end{array}\right)$ ()

then

 ${\left({\Phi }_{1}\left({X}_{1}\right)+{\Phi }_{2}\left({X}_{2}\right)\right)}^{4}=\left(\begin{array}{cc}16& 0\\ 0& 0\end{array}\right)\nleq \left(\begin{array}{cc}80& 40\\ 40& 24\end{array}\right)={\Phi }_{1}\left({X}_{1}^{4}\right)+{\Phi }_{2}\left({X}_{2}^{4}\right)$ ()

Given the above, there is no relation between ${\left({\Phi }_{1}\left({X}_{1}\right)+{\Phi }_{2}\left({X}_{2}\right)\right)}^{4}$ and ${\Phi }_{1}\left({X}_{1}^{4}\right)+{\Phi }_{2}\left({X}_{2}^{4}\right)$ under the operator order. We observe that in this case we have $X={\Phi }_{1}\left({X}_{1}\right)+{\Phi }_{2}\left({X}_{2}\right)=\left(\begin{array}{cc}2& 0\\ 0& 0\end{array}\right)$ and $\left[{m}_{x},{M}_{x}\right]=\left[0,2\right]$, $\left[{m}_{1},{M}_{1}\right]\subset \left[-1.60388,4.49396\right]$, $\left[{m}_{2},{M}_{2}\right]=\left[0,2\right]$, i.e.

 $\left({m}_{x},{M}_{x}\right)\subset \left[{m}_{1},{M}_{1}\right]\cup \left[{m}_{2},{M}_{2}\right]$ ()

(see Fig. 1.a).

#### Figure 1.

Spectral conditions for a convex function f
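Part I) of the example can be verified numerically (the matrices below are those of the example):

```python
import numpy as np

Phi = lambda B: B[:2, :2] / 2              # Phi_1 = Phi_2
X1 = 2 * np.array([[1, 0, 1],
                   [0, 0, 1],
                   [1, 1, 1]], dtype=float)
X2 = 2 * np.diag([1.0, 0.0, 0.0])

X = Phi(X1) + Phi(X2)                      # = [[2, 0], [0, 0]]
lhs = np.linalg.matrix_power(X, 4)
rhs = Phi(np.linalg.matrix_power(X1, 4)) + Phi(np.linalg.matrix_power(X2, 4))
print(lhs)                                 # [[16, 0], [0, 0]]
print(rhs)                                 # [[80, 40], [40, 24]]
print(np.linalg.eigvalsh(rhs - lhs).min() < 0)  # True: rhs - lhs is not PSD

# Spectral picture: (m_x, M_x) = (0, 2) is NOT disjoint from Sp(X2) = {0, 2}
print(np.linalg.eigvalsh(X1))              # approx [-1.604, 1.110, 4.494]
```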

##### II)
• If

 ${X}_{1}=\left(\begin{array}{ccc}-14& 0& 1\\ 0& -2& -1\\ 1& -1& -1\end{array}\right)\phantom{\rule{1.em}{0ex}}\text{and}\phantom{\rule{1.em}{0ex}}{X}_{2}=\left(\begin{array}{ccc}15& 0& 0\\ 0& 2& 0\\ 0& 0& 15\end{array}\right)$ ()

then

 ${\left({\Phi }_{1}\left({X}_{1}\right)+{\Phi }_{2}\left({X}_{2}\right)\right)}^{4}=\left(\begin{array}{cc}\frac{1}{16}& 0\\ 0& 0\end{array}\right)<\frac{1}{2}\left(\begin{array}{cc}89660& -247\\ -247& 51\end{array}\right)={\Phi }_{1}\left({X}_{1}^{4}\right)+{\Phi }_{2}\left({X}_{2}^{4}\right)$ ()

Hence an inequality of type () is now valid. In this case we have $X={\Phi }_{1}\left({X}_{1}\right)+{\Phi }_{2}\left({X}_{2}\right)=\left(\begin{array}{cc}\frac{1}{2}& 0\\ 0& 0\end{array}\right)$ and $\left[{m}_{x},{M}_{x}\right]=\left[0,0.5\right]$, $\left[{m}_{1},{M}_{1}\right]\subset \left[-14.077,-0.328566\right]$, $\left[{m}_{2},{M}_{2}\right]=\left[2,15\right]$, i.e.

 $\left({m}_{x},{M}_{x}\right)\cap \left[{m}_{1},{M}_{1}\right]=\varnothing \phantom{\rule{1.em}{0ex}}\text{and}\phantom{\rule{1.em}{0ex}}\left({m}_{x},{M}_{x}\right)\cap \left[{m}_{2},{M}_{2}\right]=\varnothing$ ()

(see Fig. 1.b).
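Part II) can be checked the same way; the sketch below also recomputes the bounds, confirming that the spectral condition holds:

```python
import numpy as np

Phi = lambda B: B[:2, :2] / 2
X1 = np.array([[-14, 0, 1],
               [0, -2, -1],
               [1, -1, -1]], dtype=float)
X2 = np.diag([15.0, 2.0, 15.0])

X = Phi(X1) + Phi(X2)                       # = [[1/2, 0], [0, 0]]
lhs = np.linalg.matrix_power(X, 4)
rhs = Phi(np.linalg.matrix_power(X1, 4)) + Phi(np.linalg.matrix_power(X2, 4))
print(np.linalg.eigvalsh(rhs - lhs).min() >= 0)  # True: Jensen's inequality holds

# Spectral condition: (m_x, M_x) = (0, 0.5) misses Sp(X1) and Sp(X2)
mx, Mx = np.linalg.eigvalsh(X)[[0, -1]]
print(mx, Mx)                               # 0.0 0.5
print(np.linalg.eigvalsh(X1))               # all negative, outside (0, 0.5)
print(np.linalg.eigvalsh(X2))               # {2, 15}, outside (0, 0.5)
```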

### 3.1. Jensen's inequality without operator convexity

It is no coincidence that the inequality () is valid in part II) of Example 5. In the following theorem we prove a general result when Jensen's operator inequality () holds for convex functions.

Theorem 6 Let ${\left({x}_{t}\right)}_{t\in T}$ be a bounded continuous field of self-adjoint elements in a unital ${C}^{*}$-algebra $𝒜$ defined on a locally compact Hausdorff space $T$ equipped with a bounded Radon measure $\mu$. Let ${m}_{t}$ and ${M}_{t}$, ${m}_{t}\le {M}_{t}$, be the bounds of ${x}_{t}$, $t\in T$. Let ${\left({\phi }_{t}\right)}_{t\in T}$ be a unital field of positive linear mappings ${\phi }_{t}:𝒜\to ℬ$ from $𝒜$ to another unital ${C}^{*}-$algebra $ℬ$. If

 $\left({m}_{x},{M}_{x}\right)\cap \left[{m}_{t},{M}_{t}\right]=\varnothing ,\phantom{\rule{2.em}{0ex}}t\in T$ ()

where ${m}_{x}$ and ${M}_{x}$, ${m}_{x}\le {M}_{x}$, are the bounds of the self-adjoint operator $x={\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)$, then

 $f\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\right)\le {\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)$ ()

holds for every continuous convex function $f:I\to ℝ$ provided that the interval $I$ contains all ${m}_{t},{M}_{t}$.

If $f:I\to ℝ$ is concave, then the reverse inequality is valid in ().

We prove only the case when $f$ is a convex function. If we denote $m=\underset{t\in T}{inf}\left\{{m}_{t}\right\}$ and $M=\underset{t\in T}{sup}\left\{{M}_{t}\right\}$, then $\left[m,M\right]\subseteq I$ and $m{1}_{H}\le {x}_{t}\le M{1}_{H}$, $t\in T$. It follows that $m{1}_{K}\le {\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\le M{1}_{K}$. Therefore $\left[{m}_{x},{M}_{x}\right]\subseteq \left[m,M\right]\subseteq I$.

a) Let ${m}_{x}<{M}_{x}$. Since $f$ is convex on $\left[{m}_{x},{M}_{x}\right]$, then

 $f\left(z\right)\le \frac{{M}_{x}-z}{{M}_{x}-{m}_{x}}f\left({m}_{x}\right)+\frac{z-{m}_{x}}{{M}_{x}-{m}_{x}}f\left({M}_{x}\right),\phantom{\rule{1.em}{0ex}}z\in \left[{m}_{x},{M}_{x}\right]$ ()

but since $f$ is convex on $\left[{m}_{t},{M}_{t}\right]$ and since $\left({m}_{x},{M}_{x}\right)\cap \left[{m}_{t},{M}_{t}\right]=\varnothing$, then

 $f\left(z\right)\ge \frac{{M}_{x}-z}{{M}_{x}-{m}_{x}}f\left({m}_{x}\right)+\frac{z-{m}_{x}}{{M}_{x}-{m}_{x}}f\left({M}_{x}\right),\phantom{\rule{1.em}{0ex}}z\in \left[{m}_{t},{M}_{t}\right],\phantom{\rule{1.em}{0ex}}t\in T$ ()

Since ${m}_{x}{1}_{K}\le {\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\le {M}_{x}{1}_{K},$ then by using functional calculus, it follows from ()

 $f\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\right)\le \frac{{M}_{x}{1}_{K}-{\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)}{{M}_{x}-{m}_{x}}f\left({m}_{x}\right)+\frac{{\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)-{m}_{x}{1}_{K}}{{M}_{x}-{m}_{x}}f\left({M}_{x}\right)$ ()

On the other hand, since ${m}_{t}{1}_{H}\le {x}_{t}\le {M}_{t}{1}_{H}$, $t\in T$, then by using functional calculus, it follows from ()

 $f\left({x}_{t}\right)\ge \frac{{M}_{x}{1}_{H}-{x}_{t}}{{M}_{x}-{m}_{x}}f\left({m}_{x}\right)+\frac{{x}_{t}-{m}_{x}{1}_{H}}{{M}_{x}-{m}_{x}}f\left({M}_{x}\right),\phantom{\rule{2.em}{0ex}}t\in T$ ()

Applying a positive linear mapping ${\phi }_{t}$ and summing, we obtain

 ${\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\ge \frac{{M}_{x}{1}_{K}-{\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)}{{M}_{x}-{m}_{x}}f\left({m}_{x}\right)+\frac{{\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)-{m}_{x}{1}_{K}}{{M}_{x}-{m}_{x}}f\left({M}_{x}\right)$ ()

since ${\int }_{T}{\phi }_{t}\left({1}_{H}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)={1}_{K}$. Combining the two inequalities () and (), we have the desired inequality ().

b) Let ${m}_{x}={M}_{x}$. Since $f$ is convex on $\left[m,M\right]$, we have

 $f\left(z\right)\ge f\left({m}_{x}\right)+l\left({m}_{x}\right)\left(z-{m}_{x}\right)\phantom{\rule{1.em}{0ex}}\text{for}\phantom{\rule{4.pt}{0ex}}\text{every}\phantom{\rule{4.pt}{0ex}}z\in \left[m,M\right]$ ()

where $l$ is the subdifferential of $f$. Since $m{1}_{H}\le {x}_{t}\le M{1}_{H}$, $t\in T$, then by using functional calculus, applying a positive linear mapping ${\phi }_{t}$ and summing, we obtain from ()

 ${\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\ge f\left({m}_{x}\right){1}_{K}+l\left({m}_{x}\right)\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)-{m}_{x}{1}_{K}\right)$ ()

Since ${m}_{x}{1}_{K}={\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right),$ it follows

 ${\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\ge f\left({m}_{x}\right){1}_{K}=f\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\right)$ ()

which is the desired inequality ().

Putting ${\phi }_{t}\left(y\right)={a}_{t}y$ for every $y\in 𝒜$, where ${a}_{t}\ge 0$ is a real number, we obtain the following obvious corollary of Theorem .

Corollary 7 Let ${\left({x}_{t}\right)}_{t\in T}$ be a bounded continuous field of self-adjoint elements in a unital ${C}^{*}$-algebra $𝒜$ defined on a locally compact Hausdorff space $T$ equipped with a bounded Radon measure $\mu$. Let ${m}_{t}$ and ${M}_{t}$, ${m}_{t}\le {M}_{t}$, be the bounds of ${x}_{t}$, $t\in T$. Let ${\left({a}_{t}\right)}_{t\in T}$ be a continuous field of nonnegative real numbers such that ${\int }_{T}{a}_{t}\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)=1$. If

 $\left({m}_{x},{M}_{x}\right)\cap \left[{m}_{t},{M}_{t}\right]=\varnothing ,\phantom{\rule{2.em}{0ex}}t\in T$ ()

where ${m}_{x}$ and ${M}_{x}$, ${m}_{x}\le {M}_{x}$, are the bounds of the self-adjoint operator $x={\int }_{T}{a}_{t}{x}_{t}\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)$, then

 $f\left({\int }_{T}{a}_{t}{x}_{t}\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\right)\le {\int }_{T}{a}_{t}f\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)$ ()

holds for every continuous convex function $f:I\to ℝ$ provided that the interval $I$ contains all ${m}_{t},{M}_{t}$.
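A discrete sketch of Corollary 7 with $T=\left\{1,2\right\}$, ${a}_{1}={a}_{2}=1/2$ and hypothetical $2\times 2$ matrices whose spectra avoid $\left({m}_{x},{M}_{x}\right)$: the convex but not operator convex function $f\left(t\right)={t}^{4}$ still satisfies the Jensen inequality:

```python
import numpy as np

# Discrete instance of Corollary 7: T = {1, 2}, a_1 = a_2 = 1/2,
# f(t) = t^4 is convex but NOT operator convex.
X1 = np.array([[-2.0, 0.3],
               [0.3, -1.0]])    # spectrum approx [-2.08, -0.92]
X2 = np.array([[3.0, 0.2],
               [0.2, 4.0]])     # spectrum approx [2.96, 4.04]
x = 0.5 * X1 + 0.5 * X2
mx, Mx = np.linalg.eigvalsh(x)[[0, -1]]   # approx 0.44, 1.56

# (m_x, M_x) is disjoint from the spectra of X1 and X2 ...
assert np.linalg.eigvalsh(X1).max() <= mx and np.linalg.eigvalsh(X2).min() >= Mx

f = lambda h: np.linalg.matrix_power(h, 4)
diff = 0.5 * f(X1) + 0.5 * f(X2) - f(x)
print(np.linalg.eigvalsh(diff).min() >= -1e-9)   # ... so Jensen's inequality holds
```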

### 3.2. Converses of Jensen's inequality with conditions on spectra

Using the condition on spectra we obtain the following extension of Theorem .

Theorem 8 Let ${\left({x}_{t}\right)}_{t\in T}$ be a bounded continuous field of self-adjoint elements in a unital ${C}^{*}$-algebra $𝒜$ defined on a locally compact Hausdorff space $T$ equipped with a bounded Radon measure $\mu$. Furthermore, let ${\left({\phi }_{t}\right)}_{t\in T}$ be a field of positive linear mappings ${\phi }_{t}:𝒜\to ℬ$ from $𝒜$ to another unital ${C}^{*}-$algebra $ℬ$, such that the function $t↦{\phi }_{t}\left({1}_{H}\right)$ is integrable with ${\int }_{T}{\phi }_{t}\left({1}_{H}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)=k{1}_{K}$ for some positive scalar $k$. Let ${m}_{t}$ and ${M}_{t}$, ${m}_{t}\le {M}_{t}$, be the bounds of ${x}_{t}$, $t\in T$, $m=\underset{t\in T}{inf}\left\{{m}_{t}\right\}$, $M=\underset{t\in T}{sup}\left\{{M}_{t}\right\}$, and ${m}_{x}$ and ${M}_{x}$, ${m}_{x}<{M}_{x}$, be the bounds of $x={\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)$. If

 $\left({m}_{x},{M}_{x}\right)\cap \left[{m}_{t},{M}_{t}\right]=\varnothing ,\phantom{\rule{2.em}{0ex}}t\in T$ ()

and $f:\left[m,M\right]\to ℝ$, $g:\left[{m}_{x},{M}_{x}\right]\to ℝ$, $F:U×V\to ℝ$ are functions such that $\left(kf\right)\left(\left[m,M\right]\right)\subset U,$ $g\left(\left[{m}_{x},{M}_{x}\right]\right)\subset V$, $f$ is convex, $F$ is bounded and operator monotone in the first variable, then

 $\begin{array}{c}\underset{{m}_{x}\le z\le {M}_{x}}{inf}F\left[\frac{{M}_{x}k-z}{{M}_{x}-{m}_{x}}f\left({m}_{x}\right)+\frac{z-k{m}_{x}}{{M}_{x}-{m}_{x}}f\left({M}_{x}\right),g\left(z\right)\right]{1}_{K}\\ \le F\left[{\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)d\mu \left(t\right),g\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)d\mu \left(t\right)\right)\right]\\ \le \underset{{m}_{x}\le z\le {M}_{x}}{sup}F\left[\frac{Mk-z}{M-m}f\left(m\right)+\frac{z-km}{M-m}f\left(M\right),g\left(z\right)\right]{1}_{K}\end{array}$ ()

In the dual case (when $f$ is concave) the opposite inequalities hold in () by replacing $inf$ and $sup$ with $sup$ and $inf$, respectively.

We prove only LHS of (). It follows from () (compare it to ())

 ${\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\ge \frac{{M}_{x}k{1}_{K}-{\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)}{{M}_{x}-{m}_{x}}f\left({m}_{x}\right)+\frac{{\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)-{m}_{x}k{1}_{K}}{{M}_{x}-{m}_{x}}f\left({M}_{x}\right)$ ()

since ${\int }_{T}{\phi }_{t}\left({1}_{H}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)=k{1}_{K}$. By using operator monotonicity of $F\left(·,v\right)$ we obtain

 $\begin{array}{ccc}& & F\left[{\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)d\mu \left(t\right),g\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)\,d\mu \left(t\right)\right)\right]\hfill \\ & \ge & F\left[\frac{{M}_{x}k{1}_{K}-{\int }_{T}{\phi }_{t}\left({x}_{t}\right)\,d\mu \left(t\right)}{{M}_{x}-{m}_{x}}f\left({m}_{x}\right)+\frac{{\int }_{T}{\phi }_{t}\left({x}_{t}\right)\,d\mu \left(t\right)-{m}_{x}k{1}_{K}}{{M}_{x}-{m}_{x}}f\left({M}_{x}\right),g\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)\,d\mu \left(t\right)\right)\right]\hfill \\ & \ge & \underset{{m}_{x}\le z\le {M}_{x}}{inf}F\left[\frac{{M}_{x}k-z}{{M}_{x}-{m}_{x}}f\left({m}_{x}\right)+\frac{z-k{m}_{x}}{{M}_{x}-{m}_{x}}f\left({M}_{x}\right),g\left(z\right)\right]{1}_{K}\hfill \end{array}$ ()

which proves the LHS of ().

Putting $F\left(u,v\right)=u-\alpha v$ or $F\left(u,v\right)={v}^{-1/2}u{v}^{-1/2}$ in Theorem , we obtain the next corollary.

Corollary 9 Let ${\left({x}_{t}\right)}_{t\in T}$, ${m}_{t}$, ${M}_{t}$, ${m}_{x}$, ${M}_{x}$, $m$, $M$, ${\left({\phi }_{t}\right)}_{t\in T}$ be as in Theorem  and $f:\left[m,M\right]\to ℝ$, $g:\left[{m}_{x},{M}_{x}\right]\to ℝ$ be continuous functions. If

 $\left({m}_{x},{M}_{x}\right)\cap \left[{m}_{t},{M}_{t}\right]=\varnothing ,\phantom{\rule{2.em}{0ex}}t\in T$ ()

and $f$ is convex, then for any $\alpha \in ℝ$

 $\begin{array}{c}\underset{{m}_{x}\le z\le {M}_{x}}{min}\left\{\frac{{M}_{x}k-z}{{M}_{x}-{m}_{x}}f\left({m}_{x}\right)+\frac{z-k{m}_{x}}{{M}_{x}-{m}_{x}}f\left({M}_{x}\right)-g\left(z\right)\right\}{1}_{K}+\alpha g\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)d\mu \left(t\right)\right)\\ \le {\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)d\mu \left(t\right)\\ \le \alpha g\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)d\mu \left(t\right)\right)+\underset{{m}_{x}\le z\le {M}_{x}}{max}\left\{\frac{Mk-z}{M-m}f\left(m\right)+\frac{z-km}{M-m}f\left(M\right)-g\left(z\right)\right\}{1}_{K}\end{array}$ ()

If additionally $g>0$ on $\left[{m}_{x},{M}_{x}\right]$, then

 $\begin{array}{c}\underset{{m}_{x}\le z\le {M}_{x}}{min}\left\{\frac{\frac{{M}_{x}k-z}{{M}_{x}-{m}_{x}}f\left({m}_{x}\right)+\frac{z-k{m}_{x}}{{M}_{x}-{m}_{x}}f\left({M}_{x}\right)}{g\left(z\right)}\right\}g\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)d\mu \left(t\right)\right)\\ \le {\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)d\mu \left(t\right)\le \underset{{m}_{x}\le z\le {M}_{x}}{max}\left\{\frac{\frac{Mk-z}{M-m}f\left(m\right)+\frac{z-km}{M-m}f\left(M\right)}{g\left(z\right)}\right\}g\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)d\mu \left(t\right)\right)\end{array}$ ()

In the dual case (when $f$ is concave) the opposite inequalities hold in () by replacing $min$ and $max$ with $max$ and $min$, respectively. If additionally $g>0$ on $\left[{m}_{x},{M}_{x}\right]$, then the opposite inequalities also hold in () by replacing $min$ and $max$ with $max$ and $min$, respectively.
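The difference-type bounds of Corollary 9 can be checked numerically. Below is a minimal NumPy sketch on a hypothetical discrete instance (all data are illustrative, not from the text): $T=\left\{1,2\right\}$, ${\phi }_{t}\left(y\right)=y/2$ (so $k=1$), $\alpha =1$, and $f\left(t\right)=g\left(t\right)={t}^{2}$, with diagonal matrices whose spectra avoid $\left({m}_{x},{M}_{x}\right)$.

```python
import numpy as np

# Hypothetical data: the matrices are diagonal, so the functional calculus
# reduces to entrywise application of f on the diagonal.
f = lambda t: t ** 2
x1 = np.diag([-4.0, -2.0])            # bounds [m_1, M_1] = [-4, -2]
x2 = np.diag([2.0, 4.0])              # bounds [m_2, M_2] = [2, 4]
x = (x1 + x2) / 2                     # = diag(-1, 1), so m_x = -1, M_x = 1
m, M, mx, Mx = -4.0, 4.0, -1.0, 1.0   # (m_x, M_x) avoids both [m_t, M_t]

mid = (f(x1) + f(x2)) / 2             # "integral" of phi_t(f(x_t)): 10 * I
gx = f(x)                             # g evaluated at the barycenter

# Scalar extrema over a fine grid of z in [m_x, M_x]:
zs = np.linspace(mx, Mx, 10001)
lo = min((Mx - z) / (Mx - mx) * f(mx) + (z - mx) / (Mx - mx) * f(Mx) - f(z)
         for z in zs)
hi = max((M - z) / (M - m) * f(m) + (z - m) / (M - m) * f(M) - f(z)
         for z in zs)

lower = lo * np.eye(2) + gx           # difference-type lower bound in ()
upper = gx + hi * np.eye(2)           # difference-type upper bound in ()
```

Here the sandwich reads $\mathrm{diag}\left(1,1\right)\le 10{I}_{2}\le 17{I}_{2}$ in the positive semidefinite order.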

## 4. Refined Jensen's inequality

In this section we present a refinement of Jensen's inequality for real-valued continuous convex functions given in Theorem . A discrete version of this result is given in [19].

To obtain our result we need the following two lemmas.

Lemma 10 Let $f$ be a convex function on an interval $I$, $m,M\in I$ and ${p}_{1},{p}_{2}\in \left[0,1\right]$ such that ${p}_{1}+{p}_{2}=1$. Then

 $min\left\{{p}_{1},{p}_{2}\right\}\left[f\left(m\right)+f\left(M\right)-2f\left(\frac{m+M}{2}\right)\right]\le {p}_{1}f\left(m\right)+{p}_{2}f\left(M\right)-f\left({p}_{1}m+{p}_{2}M\right)$ ()

This result follows from Theorem 1, p. 717 of [20].
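The scalar inequality of Lemma 10 is easy to test numerically. The following sketch uses the convex function $f\left(t\right)={t}^{4}$ (an illustrative choice; any convex $f$ works) at randomly sampled $m$, $M$ and weights ${p}_{1}$, ${p}_{2}=1-{p}_{1}$:

```python
import numpy as np

# Sanity check of Lemma 10 for the convex f(t) = t^4 at random data.
rng = np.random.default_rng(0)
f = lambda t: t ** 4

def lemma10_gap(m, M, p1):
    """RHS minus LHS of Lemma 10; nonnegative when f is convex."""
    p2 = 1.0 - p1
    lhs = min(p1, p2) * (f(m) + f(M) - 2 * f((m + M) / 2))
    rhs = p1 * f(m) + p2 * f(M) - f(p1 * m + p2 * M)
    return rhs - lhs

gaps = [lemma10_gap(*sorted(rng.uniform(-5, 5, size=2)), rng.uniform())
        for _ in range(1000)]
```

Every sampled gap is nonnegative (up to floating-point tolerance), as the lemma asserts.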

Lemma 11 Let $x$ be a bounded self-adjoint element in a unital ${C}^{*}$-algebra $𝒜$ of operators on some Hilbert space $H$. If the spectrum of $x$ is in $\left[m,M\right]$, for some scalars $m<M$, then

 $\begin{array}{ccc}\hfill f\left(x\right)& \le & \frac{M{1}_{H}-x}{M-m}f\left(m\right)+\frac{x-m{1}_{H}}{M-m}f\left(M\right)-{\delta }_{f}\stackrel{˜}{x}\hfill \\ \hfill \left(\text{resp.}\phantom{\rule{1.em}{0ex}}f\left(x\right)& \ge & \frac{M{1}_{H}-x}{M-m}f\left(m\right)+\frac{x-m{1}_{H}}{M-m}f\left(M\right)+{\delta }_{f}\stackrel{˜}{x}\phantom{\rule{1.em}{0ex}}\right)\hfill \end{array}$ ()

holds for every continuous convex $\left($resp. concave$\right)$ function $f:\left[m,M\right]\to ℝ$, where

 $\begin{array}{cc}\hfill & {\delta }_{f}=f\left(m\right)+f\left(M\right)-2f\left(\frac{m+M}{2}\right)\phantom{\rule{1.em}{0ex}}\left(\text{resp.}\phantom{\rule{0.277778em}{0ex}}{\delta }_{f}=2f\left(\frac{m+M}{2}\right)-f\left(m\right)-f\left(M\right)\right)\\ & \text{and}\phantom{\rule{1.em}{0ex}}\stackrel{˜}{x}=\frac{1}{2}{1}_{H}-\frac{1}{M-m}\left|x-\frac{m+M}{2}{1}_{H}\right|\end{array}$ ()

We prove only the convex case. It follows from () that

 $\begin{array}{ccc}\hfill f\left({p}_{1}m+{p}_{2}M\right)& \le & {p}_{1}f\left(m\right)+{p}_{2}f\left(M\right)\hfill \\ & -& min\left\{{p}_{1},{p}_{2}\right\}\left(f\left(m\right)+f\left(M\right)-2f\left(\frac{m+M}{2}\right)\right)\hfill \end{array}$ ()

for every ${p}_{1},{p}_{2}\in \left[0,1\right]$ such that ${p}_{1}+{p}_{2}=1$. For any $z\in \left[m,M\right]$ we can write

 $f\left(z\right)=f\left(\frac{M-z}{M-m}m+\frac{z-m}{M-m}M\right)$ ()

Then by using () for ${p}_{1}=\frac{M-z}{M-m}$ and ${p}_{2}=\frac{z-m}{M-m}$ we obtain

 $\begin{array}{ccc}\hfill f\left(z\right)& \le & \frac{M-z}{M-m}f\left(m\right)+\frac{z-m}{M-m}f\left(M\right)\hfill \\ & -& \left(\frac{1}{2}-\frac{1}{M-m}\left|z-\frac{m+M}{2}\right|\right)\left(f\left(m\right)+f\left(M\right)-2f\left(\frac{m+M}{2}\right)\right)\hfill \end{array}$ ()

since

 $min\left\{\frac{M-z}{M-m},\frac{z-m}{M-m}\right\}=\frac{1}{2}-\frac{1}{M-m}\left|z-\frac{m+M}{2}\right|$ ()

Finally we use the continuous functional calculus for a self-adjoint operator $x$: $f,g\in 𝒞\left(I\right),Sp\left(x\right)\subseteq I$ and $f\le g$ on $I$ implies $f\left(x\right)\le g\left(x\right)$; and $h\left(z\right)=|z|$ implies $h\left(x\right)=|x|$. Then by using () we obtain the desired inequality ().
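The operator inequality () of Lemma 11 can be verified numerically via the spectral decomposition. The sketch below uses a random self-adjoint matrix and the convex $f\left(t\right)={t}^{4}$ (an illustrative choice):

```python
import numpy as np

# Check of Lemma 11: for a random self-adjoint x with spectrum in [m, M],
# the refined chord bound dominates f(x) in the positive semidefinite order.
rng = np.random.default_rng(1)
f = lambda t: t ** 4

n = 4
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal basis
eigs = rng.uniform(-2.0, 3.0, size=n)
x = Q @ np.diag(eigs) @ Q.T
m, M = eigs.min(), eigs.max()
I = np.eye(n)

fx = Q @ np.diag(f(eigs)) @ Q.T                    # functional calculus f(x)
absdev = Q @ np.diag(np.abs(eigs - (m + M) / 2)) @ Q.T  # |x - (m+M)/2 * 1|
delta_f = f(m) + f(M) - 2 * f((m + M) / 2)
x_tilde = 0.5 * I - absdev / (M - m)
bound = ((M * I - x) * f(m) + (x - m * I) * f(M)) / (M - m) - delta_f * x_tilde
```

Both $\stackrel{˜}{x}\ge 0$ and the gap between the bound and $f\left(x\right)$ are positive semidefinite, matching ().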

Theorem 12 Let ${\left({x}_{t}\right)}_{t\in T}$ be a bounded continuous field of self-adjoint elements in a unital ${C}^{*}$-algebra $𝒜$ defined on a locally compact Hausdorff space $T$ equipped with a bounded Radon measure $\mu$. Let ${m}_{t}$ and ${M}_{t}$, ${m}_{t}\le {M}_{t}$, be the bounds of ${x}_{t}$, $t\in T$. Let ${\left({\phi }_{t}\right)}_{t\in T}$ be a unital field of positive linear mappings ${\phi }_{t}:𝒜\to ℬ$ from $𝒜$ to another unital ${C}^{*}-$algebra $ℬ$. Let

 $\left({m}_{x},{M}_{x}\right)\cap \left[{m}_{t},{M}_{t}\right]=\varnothing ,\phantom{\rule{1.em}{0ex}}t\in T,\phantom{\rule{2.em}{0ex}}\text{and}\phantom{\rule{2.em}{0ex}}m<M$ ()

where ${m}_{x}$ and ${M}_{x}$, ${m}_{x}\le {M}_{x}$, are the bounds of the operator $x={\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)$ and

 $m=sup\left\{{M}_{t}:{M}_{t}\le {m}_{x},t\in T\right\},\phantom{\rule{0.277778em}{0ex}}M=inf\left\{{m}_{t}:{m}_{t}\ge {M}_{x},t\in T\right\}$ ()

If $f:I\to ℝ$ is a continuous convex $\left($resp. concave$\right)$ function on an interval $I$ containing all ${m}_{t},{M}_{t}$, then

 $f\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\right)\le {\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)-{\delta }_{f}\stackrel{˜}{x}\le {\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)$ ()

$\left($resp.

 $f\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\right)\ge {\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)+{\delta }_{f}\stackrel{˜}{x}\ge {\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\phantom{\rule{0.277778em}{0ex}}\right)$ ()

holds, where

 $\begin{array}{ccc}\hfill {\delta }_{f}\equiv {\delta }_{f}\left(\overline{m},\overline{M}\right)& =& f\left(\overline{m}\right)+f\left(\overline{M}\right)-2f\left(\frac{\overline{m}+\overline{M}}{2}\right)\hfill \\ \hfill \left(\text{resp.}\phantom{\rule{1.em}{0ex}}{\delta }_{f}\equiv {\delta }_{f}\left(\overline{m},\overline{M}\right)& =& 2f\left(\frac{\overline{m}+\overline{M}}{2}\right)-f\left(\overline{m}\right)-f\left(\overline{M}\right)\phantom{\rule{0.277778em}{0ex}}\right)\hfill \\ \hfill \stackrel{˜}{x}\equiv {\stackrel{˜}{x}}_{x}\left(\overline{m},\overline{M}\right)& =& \frac{1}{2}{1}_{K}-\frac{1}{\overline{M}-\overline{m}}\left|x-\frac{\overline{m}+\overline{M}}{2}{1}_{K}\right|\hfill \end{array}$ ()

and  $\overline{m}\in \left[m,{m}_{x}\right]$, $\overline{M}\in \left[{M}_{x},M\right]$, $\overline{m}<\overline{M}$,   are arbitrary numbers.

We prove only the convex case. Since $x={\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\in ℬ$ is a self-adjoint element such that $\overline{m}{1}_{K}\le {m}_{x}{1}_{K}\le {\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\le {M}_{x}{1}_{K}\le \overline{M}{1}_{K}$ and $f$ is convex on $\left[\overline{m},\overline{M}\right]\subseteq I$, Lemma  gives

 $f\left({\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\right)\le \frac{\overline{M}{1}_{K}-{\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)}{\overline{M}-\overline{m}}f\left(\overline{m}\right)+\frac{{\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)-\overline{m}{1}_{K}}{\overline{M}-\overline{m}}f\left(\overline{M}\right)-{\delta }_{f}\stackrel{˜}{x}$ ()

where ${\delta }_{f}$ and $\stackrel{˜}{x}$ are defined by ().

On the other hand, since $f$ is convex on $\left[{m}_{t},{M}_{t}\right]$ and $\left({m}_{x},{M}_{x}\right)\cap \left[{m}_{t},{M}_{t}\right]=\varnothing$ implies $\left(\overline{m},\overline{M}\right)\cap \left[{m}_{t},{M}_{t}\right]=\varnothing$, we have

 $f\left({x}_{t}\right)\ge \frac{\overline{M}{1}_{H}-{x}_{t}}{\overline{M}-\overline{m}}f\left(\overline{m}\right)+\frac{{x}_{t}-\overline{m}{1}_{H}}{\overline{M}-\overline{m}}f\left(\overline{M}\right),\phantom{\rule{1.em}{0ex}}t\in T$ ()

Applying a positive linear mapping ${\phi }_{t}$, integrating and adding $-{\delta }_{f}\stackrel{˜}{x}$, we obtain

 ${\int }_{T}{\phi }_{t}\left(f\left({x}_{t}\right)\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)-{\delta }_{f}\stackrel{˜}{x}\ge \frac{\overline{M}{1}_{K}-{\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)}{\overline{M}-\overline{m}}f\left(\overline{m}\right)+\frac{{\int }_{T}{\phi }_{t}\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)-\overline{m}{1}_{K}}{\overline{M}-\overline{m}}f\left(\overline{M}\right)-{\delta }_{f}\stackrel{˜}{x}$ ()

since ${\int }_{T}{\phi }_{t}\left({1}_{H}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)={1}_{K}$. Combining the inequalities () and (), we obtain the LHS of (). Since ${\delta }_{f}\ge 0$ and $\stackrel{˜}{x}\ge 0$, we obtain the RHS of ().

If $m<M$ and ${m}_{x}={M}_{x}$, then inequality () holds, but ${\delta }_{f}\left({m}_{x},{M}_{x}\right)\phantom{\rule{0.166667em}{0ex}}\stackrel{˜}{x}\left({m}_{x},{M}_{x}\right)$ is not defined (see Example , parts I) and II)).

Example 13 We give examples for the matrix case with $T=\left\{1,2\right\}$; the resulting refined inequalities are illustrated in Fig. 2.

#### Figure 2.

Refinement for two operators and a convex function f

We put $f\left(t\right)={t}^{4}$ which is convex but not operator convex in (). Also, we define mappings ${\Phi }_{1},{\Phi }_{2}:{M}_{3}\left(ℂ\right)\to {M}_{2}\left(ℂ\right)$ as follows: ${\Phi }_{1}\left({\left({a}_{ij}\right)}_{1\le i,j\le 3}\right)=\frac{1}{2}{\left({a}_{ij}\right)}_{1\le i,j\le 2}$, ${\Phi }_{2}={\Phi }_{1}$ (then ${\Phi }_{1}\left({I}_{3}\right)+{\Phi }_{2}\left({I}_{3}\right)={I}_{2}$).

I) First, we observe an example in which ${\delta }_{f}\stackrel{˜}{X}$ is equal to the difference between the RHS and LHS of Jensen's inequality. If ${X}_{1}=-3{I}_{3}$ and ${X}_{2}=2{I}_{3}$, then $X={\Phi }_{1}\left({X}_{1}\right)+{\Phi }_{2}\left({X}_{2}\right)=-0.5{I}_{2}$, so $m=-3$, $M=2$. We also put $\overline{m}=-3$ and $\overline{M}=2$. We obtain

 ${\left({\Phi }_{1}\left({X}_{1}\right)+{\Phi }_{2}\left({X}_{2}\right)\right)}^{4}=0.0625{I}_{2}<48.5{I}_{2}={\Phi }_{1}\left({X}_{1}^{4}\right)+{\Phi }_{2}\left({X}_{2}^{4}\right)$ ()

and its improvement

 ${\left({\Phi }_{1}\left({X}_{1}\right)+{\Phi }_{2}\left({X}_{2}\right)\right)}^{4}=0.0625{I}_{2}={\Phi }_{1}\left({X}_{1}^{4}\right)+{\Phi }_{2}\left({X}_{2}^{4}\right)-48.4375{I}_{2}$ ()

since ${\delta }_{f}=96.875$, $\stackrel{˜}{X}=0.5{I}_{2}.$ We remark that in this case ${m}_{x}={M}_{x}=-1/2$ and $\stackrel{˜}{X}\left({m}_{x},{M}_{x}\right)$ is not defined.
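Case I) can be recomputed directly; the following NumPy sketch confirms that here ${\delta }_{f}\stackrel{˜}{X}$ recovers the whole gap, so the refined inequality becomes an equality:

```python
import numpy as np

# Recomputation of case I) with the mappings Phi_1 = Phi_2 from the example.
f = lambda t: t ** 4
I2, I3 = np.eye(2), np.eye(3)
Phi = lambda A: 0.5 * A[:2, :2]          # compression to the 2x2 corner

X1, X2 = -3 * I3, 2 * I3
X = Phi(X1) + Phi(X2)                    # = -0.5 * I2
lhs = np.linalg.matrix_power(X, 4)       # = 0.0625 * I2
rhs = Phi(np.linalg.matrix_power(X1, 4)) + Phi(np.linalg.matrix_power(X2, 4))

m_bar, M_bar = -3.0, 2.0
delta_f = f(m_bar) + f(M_bar) - 2 * f((m_bar + M_bar) / 2)   # = 96.875
# X is a multiple of the identity, so the absolute value is entrywise:
X_tilde = 0.5 * I2 - np.abs(X - (m_bar + M_bar) / 2 * I2) / (M_bar - m_bar)
```

Indeed `rhs - delta_f * X_tilde` equals `lhs` exactly, matching ().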

II) Next, we observe an example in which ${\delta }_{f}\stackrel{˜}{X}$ is not equal to the difference between the RHS and LHS of Jensen's inequality and ${m}_{x}={M}_{x}$. If

 ${X}_{1}=\left(\begin{array}{ccc}-1& 0& 0\\ 0& -2& 0\\ 0& 0& -1\end{array}\right),\phantom{\rule{0.277778em}{0ex}}{X}_{2}=\left(\begin{array}{ccc}2& 0& 0\\ 0& 3& 0\\ 0& 0& 4\end{array}\right),\phantom{\rule{0.277778em}{0ex}}\text{then}\phantom{\rule{0.277778em}{0ex}}X=\frac{1}{2}\left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right)\phantom{\rule{0.277778em}{0ex}}\text{and}\phantom{\rule{0.277778em}{0ex}}m=-1,\phantom{\rule{0.277778em}{0ex}}M=2$ ()

In this case $\stackrel{˜}{x}\left({m}_{x},{M}_{x}\right)$ is not defined, since ${m}_{x}={M}_{x}=1/2$. We have

 ${\left({\Phi }_{1}\left({X}_{1}\right)+{\Phi }_{2}\left({X}_{2}\right)\right)}^{4}=\frac{1}{16}\left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right)<\left(\begin{array}{cc}\frac{17}{2}& 0\\ 0& \frac{97}{2}\end{array}\right)={\Phi }_{1}\left({X}_{1}^{4}\right)+{\Phi }_{2}\left({X}_{2}^{4}\right)$ ()

and putting $\overline{m}=-1$, $\overline{M}=2$ we obtain ${\delta }_{f}=135/8$, $\stackrel{˜}{X}={I}_{2}/2$, which gives the following improvement

 ${\left({\Phi }_{1}\left({X}_{1}\right)+{\Phi }_{2}\left({X}_{2}\right)\right)}^{4}=\frac{1}{16}\left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right)<\frac{1}{16}\left(\begin{array}{cc}1& 0\\ 0& 641\end{array}\right)={\Phi }_{1}\left({X}_{1}^{4}\right)+{\Phi }_{2}\left({X}_{2}^{4}\right)-\frac{135}{16}\left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right)$ ()
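Case II) can be recomputed in the same way; here the improvement ${\delta }_{f}\stackrel{˜}{X}=\left(135/16\right){I}_{2}$ narrows, but does not close, the gap:

```python
import numpy as np

# Recomputation of case II): X1, X2 diagonal, Phi_1 = Phi_2 as in the example.
f = lambda t: t ** 4
I2 = np.eye(2)
Phi = lambda A: 0.5 * A[:2, :2]

X1 = np.diag([-1.0, -2.0, -1.0])
X2 = np.diag([2.0, 3.0, 4.0])
X = Phi(X1) + Phi(X2)                    # = 0.5 * I2
lhs = np.linalg.matrix_power(X, 4)       # = I2 / 16
rhs = Phi(np.linalg.matrix_power(X1, 4)) + Phi(np.linalg.matrix_power(X2, 4))

m_bar, M_bar = -1.0, 2.0
delta_f = f(m_bar) + f(M_bar) - 2 * f((m_bar + M_bar) / 2)   # = 135/8
# X is a multiple of the identity, so the absolute value is entrywise:
X_tilde = 0.5 * I2 - np.abs(X - (m_bar + M_bar) / 2 * I2) / (M_bar - m_bar)
improved = rhs - delta_f * X_tilde       # = diag(1, 641) / 16
```

The improved upper bound still dominates the LHS in the positive semidefinite order.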

III) Next, we observe an example with matrices that are not of a special form. If

 ${X}_{1}=\left(\begin{array}{ccc}-4& 1& 1\\ 1& -2& -1\\ 1& -1& -1\end{array}\right)\phantom{\rule{1.em}{0ex}}\text{and}\phantom{\rule{1.em}{0ex}}{X}_{2}=\left(\begin{array}{ccc}5& -1& -1\\ -1& 2& 1\\ -1& 1& 3\end{array}\right),\phantom{\rule{1.em}{0ex}}\text{then}\phantom{\rule{1.em}{0ex}}X=\frac{1}{2}\left(\begin{array}{cc}1& 0\\ 0& 0\end{array}\right)$ ()

so ${m}_{1}=-4.8662$, ${M}_{1}=-0.3446$, ${m}_{2}=1.3446$, ${M}_{2}=5.8662$, $m=-0.3446$, $M=1.3446$ and we put $\overline{m}=m$, $\overline{M}=M$ (rounded to four decimal places). We have

 ${\left({\Phi }_{1}\left({X}_{1}\right)+{\Phi }_{2}\left({X}_{2}\right)\right)}^{4}=\frac{1}{16}\left(\begin{array}{cc}1& 0\\ 0& 0\end{array}\right)<\left(\begin{array}{cc}\frac{1283}{2}& -255\\ -255& \frac{237}{2}\end{array}\right)={\Phi }_{1}\left({X}_{1}^{4}\right)+{\Phi }_{2}\left({X}_{2}^{4}\right)$ ()

and its improvement

 $\begin{array}{ccc}& & {\left({\Phi }_{1}\left({X}_{1}\right)+{\Phi }_{2}\left({X}_{2}\right)\right)}^{4}=\frac{1}{16}\left(\begin{array}{cc}1& 0\\ 0& 0\end{array}\right)\hfill \\ & <& \left(\begin{array}{cc}639.9213& -255\\ -255& 117.8559\end{array}\right)={\Phi }_{1}\left({X}_{1}^{4}\right)+{\Phi }_{2}\left({X}_{2}^{4}\right)-\left(\begin{array}{cc}1.5787& 0\\ 0& 0.6441\end{array}\right)\hfill \end{array}$ ()

(rounded to four decimal places), since ${\delta }_{f}=3.1574$, $\stackrel{˜}{X}=\left(\begin{array}{cc}0.5& 0\\ 0& 0.2040\end{array}\right)$. But, if we put $\overline{m}={m}_{x}=0$, $\overline{M}={M}_{x}=0.5$, then $\stackrel{˜}{X}=\mathbf{0}$, so we do not have an improvement of Jensen's inequality. Also, if we put $\overline{m}=0$, $\overline{M}=1$, then $\stackrel{˜}{X}=0.5\left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right)$, ${\delta }_{f}=7/8$ and ${\delta }_{f}\stackrel{˜}{X}=0.4375\left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right)$, which is worse than the above improvement.
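The rounded quantities of case III) can be recovered from the matrices themselves, computing the bounds $m={M}_{1}$ and $M={m}_{2}$ as eigenvalues:

```python
import numpy as np

# Recomputation of case III): bounds and improvement term from scratch.
f = lambda t: t ** 4
Phi = lambda A: 0.5 * A[:2, :2]

X1 = np.array([[-4.0, 1, 1], [1, -2, -1], [1, -1, -1]])
X2 = np.array([[5.0, -1, -1], [-1, 2, 1], [-1, 1, 3]])
X = Phi(X1) + Phi(X2)                    # = diag(1/2, 0)
lhs = np.linalg.matrix_power(X, 4)
rhs = Phi(np.linalg.matrix_power(X1, 4)) + Phi(np.linalg.matrix_power(X2, 4))

m_bar = np.linalg.eigvalsh(X1).max()     # M_1, approx -0.3446
M_bar = np.linalg.eigvalsh(X2).min()     # m_2, approx  1.3446
delta_f = f(m_bar) + f(M_bar) - 2 * f((m_bar + M_bar) / 2)   # approx 3.1574

# Matrix absolute value via the spectral decomposition:
w, Q = np.linalg.eigh(X - (m_bar + M_bar) / 2 * np.eye(2))
absdev = Q @ np.diag(np.abs(w)) @ Q.T    # |X - (m_bar + M_bar)/2 * I_2|
X_tilde = 0.5 * np.eye(2) - absdev / (M_bar - m_bar)
improved = rhs - delta_f * X_tilde
```

The recomputed `delta_f` and `X_tilde` match the rounded values above, and `improved` still dominates `lhs` in the positive semidefinite order.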

Putting ${\Phi }_{t}\left(y\right)={a}_{t}y$ for every $y\in 𝒜$, where ${a}_{t}\ge 0$ is a real number, we obtain the following obvious corollary of Theorem .

Corollary 14 Let ${\left({x}_{t}\right)}_{t\in T}$ be a bounded continuous field of self-adjoint elements in a unital ${C}^{*}$-algebra $𝒜$ defined on a locally compact Hausdorff space $T$ equipped with a bounded Radon measure $\mu$. Let ${m}_{t}$ and ${M}_{t}$, ${m}_{t}\le {M}_{t}$, be the bounds of ${x}_{t}$, $t\in T$. Let ${\left({a}_{t}\right)}_{t\in T}$ be a continuous field of nonnegative real numbers such that ${\int }_{T}{a}_{t}\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)=1$. Let

 $\left({m}_{x},{M}_{x}\right)\cap \left[{m}_{t},{M}_{t}\right]=\varnothing ,\phantom{\rule{1.em}{0ex}}t\in T,\phantom{\rule{2.em}{0ex}}\text{and}\phantom{\rule{2.em}{0ex}}m<M$ ()

where ${m}_{x}$ and ${M}_{x}$, ${m}_{x}\le {M}_{x}$, are the bounds of the operator $x={\int }_{T}{a}_{t}{x}_{t}\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)$ and

 $m=sup\left\{{M}_{t}:{M}_{t}\le {m}_{x},t\in T\right\},\phantom{\rule{0.277778em}{0ex}}M=inf\left\{{m}_{t}:{m}_{t}\ge {M}_{x},t\in T\right\}$ ()

If $f:I\to ℝ$ is a continuous convex $\left($resp. concave$\right)$ function on an interval $I$ containing all ${m}_{t},{M}_{t}$, then

 $\begin{array}{ccc}\hfill f\left({\int }_{T}{a}_{t}{x}_{t}\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\right)& \le & {\int }_{T}{a}_{t}f\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)-{\delta }_{f}\stackrel{˜}{\stackrel{˜}{x}}\le {\int }_{T}{a}_{t}f\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\hfill \\ \hfill \left(\text{resp.}\phantom{\rule{1.em}{0ex}}f\left({\int }_{T}{a}_{t}{x}_{t}\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\right)& \ge & {\int }_{T}{a}_{t}f\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)+{\delta }_{f}\stackrel{˜}{\stackrel{˜}{x}}\ge {\int }_{T}{a}_{t}f\left({x}_{t}\right)\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)\phantom{\rule{0.277778em}{0ex}}\right)\hfill \end{array}$ ()

holds, where ${\delta }_{f}$ is defined by (), $\stackrel{˜}{\stackrel{˜}{x}}=\frac{1}{2}{1}_{H}-\frac{1}{\overline{M}-\overline{m}}\left|{\int }_{T}{a}_{t}{x}_{t}\phantom{\rule{0.166667em}{0ex}}d\mu \left(t\right)-\frac{\overline{m}+\overline{M}}{2}{1}_{H}\right|$ and $\overline{m}\in \left[m,{m}_{x}\right]$, $\overline{M}\in \left[{M}_{x},M\right]$, $\overline{m}<\overline{M}$, are arbitrary numbers.

## 5. Extension of Jensen's inequality

In this section we present an extension of Jensen's operator inequality for $n-$tuples of self-adjoint operators, unital $n-$tuples of positive linear mappings, and real-valued continuous convex functions with conditions on the spectra of the operators.

In a discrete version of Theorem  we prove that Jensen's operator inequality holds for every continuous convex function and for every $n-$tuple of self-adjoint operators $\left({A}_{1},...,{A}_{n}\right)$, for every $n-$tuple of positive linear mappings $\left({\Phi }_{1},...,{\Phi }_{n}\right)$ in the case when the interval with bounds of the operator $A={\sum }_{i=1}^{n}{\Phi }_{i}\left({A}_{i}\right)$ has no intersection points with the interval with bounds of the operator ${A}_{i}$ for each $i=1,...,n$, i.e. when $\left({m}_{A},{M}_{A}\right)\cap \left[{m}_{i},{M}_{i}\right]=\varnothing$ for $i=1,...,n,$ where ${m}_{A}$ and ${M}_{A}$, ${m}_{A}\le {M}_{A}$, are the bounds of $A$, and ${m}_{i}$ and ${M}_{i}$, ${m}_{i}\le {M}_{i}$, are the bounds of ${A}_{i}$, $i=1,...,n$. It is interesting to consider the case when $\left({m}_{A},{M}_{A}\right)\cap \left[{m}_{i},{M}_{i}\right]=\varnothing$ is valid for several $i\in \left\{1,...,n\right\}$, but not for all $i=1,...,n$. We study it in the following theorem (see [21]).

Theorem 15 Let $\left({A}_{1},...,{A}_{n}\right)$ be an $n-$tuple of self-adjoint operators ${A}_{i}\in B\left(H\right)$ with the bounds ${m}_{i}$ and ${M}_{i}$, ${m}_{i}\le {M}_{i}$, $i=1,...,n$. Let $\left({\Phi }_{1},...,{\Phi }_{n}\right)$ be an $n-$tuple of positive linear mappings ${\Phi }_{i}:B\left(H\right)\to B\left(K\right)$, such that ${\sum }_{i=1}^{n}{\Phi }_{i}\left({1}_{H}\right)={1}_{K}$. For $1\le {n}_{1}<n$, we denote $m=min\left\{{m}_{1},...,{m}_{{n}_{1}}\right\}$, $M=max\left\{{M}_{1},...,{M}_{{n}_{1}}\right\}$ and ${\sum }_{i=1}^{{n}_{1}}{\Phi }_{i}\left({1}_{H}\right)=\alpha \phantom{\rule{0.166667em}{0ex}}{1}_{K}$, ${\sum }_{i={n}_{1}+1}^{n}{\Phi }_{i}\left({1}_{H}\right)=\beta \phantom{\rule{0.166667em}{0ex}}{1}_{K}$, where $\alpha ,\beta >0$, $\alpha +\beta =1$. If

 $\left(m,M\right)\cap \left[{m}_{i},{M}_{i}\right]=\varnothing ,\phantom{\rule{2.em}{0ex}}i={n}_{1}+1,...,n$ ()

and one of two equalities

 $\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left({A}_{i}\right)=\frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left({A}_{i}\right)=\sum _{i=1}^{n}{\Phi }_{i}\left({A}_{i}\right)$ ()

is valid, then

 $\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\le \sum _{i=1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\le \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)$ ()

holds for every continuous convex function $f:I\to ℝ$ provided that the interval $I$ contains all ${m}_{i},{M}_{i}$, $i=1,...,n$. If $f:I\to ℝ$ is concave, then the reverse inequality is valid in ().

We prove only the case when $f$ is a convex function. Let us denote

 $A=\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left({A}_{i}\right),\phantom{\rule{2.em}{0ex}}B=\frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left({A}_{i}\right),\phantom{\rule{2.em}{0ex}}C=\sum _{i=1}^{n}{\Phi }_{i}\left({A}_{i}\right)$ ()

It is easy to verify that $A=B$ or $B=C$ or $A=C$ implies $A=B=C$, since $C=\alpha A+\beta B$ and $\alpha +\beta =1$.

a) Let $m<M$. Since $f$ is convex on $\left[m,M\right]$ and $\left[{m}_{i},{M}_{i}\right]\subseteq \left[m,M\right]$ for $i=1,...,{n}_{1}$, then

 $f\left(z\right)\le \frac{M-z}{M-m}f\left(m\right)+\frac{z-m}{M-m}f\left(M\right),\phantom{\rule{1.em}{0ex}}z\in \left[{m}_{i},{M}_{i}\right]\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\text{for}\phantom{\rule{4.pt}{0ex}}i=1,...,{n}_{1}$ ()

but since $f$ is convex on all $\left[{m}_{i},{M}_{i}\right]$ and $\left(m,M\right)\cap \left[{m}_{i},{M}_{i}\right]=\varnothing$ for $i={n}_{1}+1,...,n$, then

 $f\left(z\right)\ge \frac{M-z}{M-m}f\left(m\right)+\frac{z-m}{M-m}f\left(M\right),\phantom{\rule{1.em}{0ex}}z\in \left[{m}_{i},{M}_{i}\right]\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\text{for}\phantom{\rule{4.pt}{0ex}}i={n}_{1}+1,...,n$ ()

Since ${m}_{i}{1}_{H}\le {A}_{i}\le {M}_{i}{1}_{H}$, $i=1,...,{n}_{1}$, it follows from ()

 $f\left({A}_{i}\right)\le \frac{M{1}_{H}-{A}_{i}}{M-m}f\left(m\right)+\frac{{A}_{i}-m{1}_{H}}{M-m}f\left(M\right),\phantom{\rule{2.em}{0ex}}i=1,...,{n}_{1}$ ()

Applying a positive linear mapping ${\Phi }_{i}$ and summing, we obtain

 $\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\le \frac{M\alpha {1}_{K}-{\sum }_{i=1}^{{n}_{1}}{\Phi }_{i}\left({A}_{i}\right)}{M-m}f\left(m\right)+\frac{{\sum }_{i=1}^{{n}_{1}}{\Phi }_{i}\left({A}_{i}\right)-m\alpha {1}_{K}}{M-m}f\left(M\right)$ ()

since ${\sum }_{i=1}^{{n}_{1}}{\Phi }_{i}\left({1}_{H}\right)=\alpha {1}_{K}$. It follows

 $\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\le \frac{M{1}_{K}-A}{M-m}f\left(m\right)+\frac{A-m{1}_{K}}{M-m}f\left(M\right)$ ()

Similarly to () in the case ${m}_{i}{1}_{H}\le {A}_{i}\le {M}_{i}{1}_{H}$, $i={n}_{1}+1,...,n$, it follows from ()

 $\frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\ge \frac{M{1}_{K}-B}{M-m}f\left(m\right)+\frac{B-m{1}_{K}}{M-m}f\left(M\right)$ ()

Combining () and () and taking into account that $A=B$, we obtain

 $\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\le \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)$ ()

It follows

 $\begin{array}{ccc}\hfill \frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)& =& \sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)+\frac{\beta }{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\phantom{\rule{2.em}{0ex}}\phantom{\rule{2.em}{0ex}}\phantom{\rule{2.em}{0ex}}\left(\text{by}\phantom{\rule{0.277778em}{0ex}}\alpha +\beta =1\right)\hfill \\ & \le & \sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)+\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\phantom{\rule{2.em}{0ex}}\phantom{\rule{2.em}{0ex}}\phantom{\rule{2.em}{0ex}}\phantom{\rule{2.em}{0ex}}\left(\text{by}\phantom{\rule{0.277778em}{0ex}}\left(\right)\right)\hfill \\ & =& \sum _{i=1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\hfill \\ & \le & \frac{\alpha }{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)+\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\phantom{\rule{2.em}{0ex}}\phantom{\rule{2.em}{0ex}}\phantom{\rule{1.em}{0ex}}\left(\text{by}\phantom{\rule{0.277778em}{0ex}}\left(\right)\right)\hfill \\ & =& \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\phantom{\rule{2.em}{0ex}}\phantom{\rule{2.em}{0ex}}\phantom{\rule{2.em}{0ex}}\phantom{\rule{2.em}{0ex}}\phantom{\rule{2.em}{0ex}}\phantom{\rule{1.em}{0ex}}\left(\text{by}\phantom{\rule{0.277778em}{0ex}}\alpha +\beta =1\right)\hfill \end{array}$ ()

which gives the desired double inequality ().

b) Let $m=M$. Since $\left[{m}_{i},{M}_{i}\right]\subseteq \left[m,M\right]$ for $i=1,...,{n}_{1}$, we have ${A}_{i}=m{1}_{H}$ and $f\left({A}_{i}\right)=f\left(m\right){1}_{H}$ for $i=1,...,{n}_{1}$. It follows

 $\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left({A}_{i}\right)=m{1}_{K}\phantom{\rule{2.em}{0ex}}\text{and}\phantom{\rule{2.em}{0ex}}\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)=f\left(m\right){1}_{K}$ ()

On the other hand, since $f$ is convex on $I$, we have

 $f\left(z\right)\ge f\left(m\right)+l\left(m\right)\left(z-m\right)\phantom{\rule{1.em}{0ex}}\text{for}\phantom{\rule{4.pt}{0ex}}\text{every}\phantom{\rule{4.pt}{0ex}}z\in I$ ()

where $l\left(m\right)$ is a subgradient of $f$ at $m$. Replacing $z$ by ${A}_{i}$ for $i={n}_{1}+1,...,n$, applying ${\Phi }_{i}$ and summing, we obtain from () and ()

 $\begin{array}{ccc}\hfill \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)& \ge & f\left(m\right){1}_{K}+l\left(m\right)\left(\frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left({A}_{i}\right)-m{1}_{K}\right)\hfill \\ & =& f\left(m\right){1}_{K}=\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\hfill \end{array}$ ()

So () holds again. The remaining part of the proof is the same as in the case a).

Remark 16 We obtain an inequality equivalent to the one in Theorem  in the case when ${\sum }_{i=1}^{n}{\Phi }_{i}\left({1}_{H}\right)=\gamma \phantom{\rule{0.166667em}{0ex}}{1}_{K}$, for some positive scalar $\gamma$. If $\alpha +\beta =\gamma$ and one of two equalities

 $\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left({A}_{i}\right)=\frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left({A}_{i}\right)=\frac{1}{\gamma }\sum _{i=1}^{n}{\Phi }_{i}\left({A}_{i}\right)$ ()

is valid, then

 $\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\le \frac{1}{\gamma }\sum _{i=1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\le \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)$ ()

holds for every continuous convex function $f$.

Remark 17 Let the assumptions of Theorem  be valid.

1. We observe that the following inequality

 $f\left(\frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left({A}_{i}\right)\right)\le \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)$ ()

holds for every continuous convex function $f:I\to ℝ$.

Indeed, by the assumptions of Theorem  we have

 $m\alpha {1}_{K}\le \sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left({A}_{i}\right)\le M\alpha {1}_{K}\phantom{\rule{1.em}{0ex}}\text{and}\phantom{\rule{1.em}{0ex}}\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left({A}_{i}\right)=\frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left({A}_{i}\right)$ ()

which implies

 $m{1}_{K}\le \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left({A}_{i}\right)\le M{1}_{K}$ ()

Also $\left(m,M\right)\cap \left[{m}_{i},{M}_{i}\right]=\varnothing$ for $i={n}_{1}+1,...,n$ and ${\sum }_{i={n}_{1}+1}^{n}\frac{1}{\beta }{\Phi }_{i}\left({1}_{H}\right)={1}_{K}$ hold. So we can apply Theorem  to the operators ${A}_{{n}_{1}+1},...,{A}_{n}$ and the mappings $\frac{1}{\beta }{\Phi }_{i}$, which gives the desired inequality.

2. We denote by ${m}_{C}$ and ${M}_{C}$ the bounds of $C={\sum }_{i=1}^{n}{\Phi }_{i}\left({A}_{i}\right)$. If $\left({m}_{C},{M}_{C}\right)\cap \left[{m}_{i},{M}_{i}\right]=\varnothing$, $i=1,...,{n}_{1}$, or $f$ is an operator convex function on $\left[m,M\right]$, then the double inequality () can be extended on the left-hand side by applying Jensen's operator inequality (see Theorem 2.1[16])

 $\begin{array}{ccc}\hfill f\left(\sum _{i=1}^{n}{\Phi }_{i}\left({A}_{i}\right)\right)& =& f\left(\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left({A}_{i}\right)\right)\hfill \\ & \le & \frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\le \sum _{i=1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\le \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\hfill \end{array}$ ()

Example 18 If neither the assumption $\left({m}_{C},{M}_{C}\right)\cap \left[{m}_{i},{M}_{i}\right]=\varnothing$, $i=1,...,{n}_{1}$, nor the operator convexity of $f$ in Remark , part 2., is satisfied, and if $1<{n}_{1}<n$, then () cannot be extended by Jensen's operator inequality, since it is not valid. Indeed, for ${n}_{1}=2$ we define mappings ${\Phi }_{1},{\Phi }_{2}:{M}_{3}\left(ℂ\right)\to {M}_{2}\left(ℂ\right)$ by ${\Phi }_{1}\left({\left({a}_{ij}\right)}_{1\le i,j\le 3}\right)=\frac{\alpha }{2}{\left({a}_{ij}\right)}_{1\le i,j\le 2}$, ${\Phi }_{2}={\Phi }_{1}$. Then ${\Phi }_{1}\left({I}_{3}\right)+{\Phi }_{2}\left({I}_{3}\right)=\alpha {I}_{2}$. If

 ${A}_{1}=2\left(\begin{array}{ccc}1& 0& 1\\ 0& 0& 1\\ 1& 1& 1\end{array}\right)\phantom{\rule{1.em}{0ex}}\text{and}\phantom{\rule{1.em}{0ex}}{A}_{2}=2\left(\begin{array}{ccc}1& 0& 0\\ 0& 0& 0\\ 0& 0& 0\end{array}\right)$ ()

then

 ${\left(\frac{1}{\alpha }{\Phi }_{1}\left({A}_{1}\right)+\frac{1}{\alpha }{\Phi }_{2}\left({A}_{2}\right)\right)}^{4}=\frac{1}{{\alpha }^{4}}\left(\begin{array}{cc}16& 0\\ 0& 0\end{array}\right)\nleq \frac{1}{\alpha }\left(\begin{array}{cc}80& 40\\ 40& 24\end{array}\right)=\frac{1}{\alpha }{\Phi }_{1}\left({A}_{1}^{4}\right)+\frac{1}{\alpha }{\Phi }_{2}\left({A}_{2}^{4}\right)$ ()

for every $\alpha \in \left(0,1\right)$ (the factors of $\alpha$ cancel). We observe that $f\left(t\right)={t}^{4}$ is not operator convex and $\left({m}_{C},{M}_{C}\right)\cap \left[{m}_{i},{M}_{i}\right]\ne \varnothing ,$ since $C=A=\frac{1}{\alpha }{\Phi }_{1}\left({A}_{1}\right)+\frac{1}{\alpha }{\Phi }_{2}\left({A}_{2}\right)=\left(\begin{array}{cc}2& 0\\ 0& 0\end{array}\right),$ $\left[{m}_{C},{M}_{C}\right]=\left[0,2\right]$, $\left[{m}_{1},{M}_{1}\right]\subset \left[-1.60388,4.49396\right]$ and $\left[{m}_{2},{M}_{2}\right]=\left[0,2\right]$.
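The failure of the operator inequality in this counterexample can be checked numerically. The NumPy sketch below reproduces the computation; the value of $\alpha$ is an arbitrary choice from $\left(0,1\right)$, since the factors of $\alpha$ cancel with the given mappings.

```python
import numpy as np

# Matrices and mappings from Example 18
A1 = 2 * np.array([[1, 0, 1], [0, 0, 1], [1, 1, 1]], dtype=float)
A2 = 2 * np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)

alpha = 0.5                               # any alpha in (0, 1); it cancels below
Phi = lambda X: (alpha / 2) * X[:2, :2]   # Phi_1 = Phi_2: scaled upper-left 2x2 corner

lhs = np.linalg.matrix_power((Phi(A1) + Phi(A2)) / alpha, 4)
rhs = (Phi(np.linalg.matrix_power(A1, 4)) + Phi(np.linalg.matrix_power(A2, 4))) / alpha

# rhs - lhs has a negative eigenvalue, so the Jensen-type ordering fails
print(np.min(np.linalg.eigvalsh(rhs - lhs)))  # ≈ -0.7214
```

The negative eigenvalue confirms that the fourth power of the averaged operators is not dominated by the averaged fourth powers, as claimed.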

With respect to Remark , we obtain the following obvious corollary of Theorem .

Corollary 19 Let $\left({A}_{1},...,{A}_{n}\right)$ be an $n-$tuple of self-adjoint operators ${A}_{i}\in B\left(H\right)$ with the bounds ${m}_{i}$ and ${M}_{i}$, ${m}_{i}\le {M}_{i}$, $i=1,...,n$. For some $1\le {n}_{1}<n$, we denote $m=min\left\{{m}_{1},...,{m}_{{n}_{1}}\right\}$, $M=max\left\{{M}_{1},...,{M}_{{n}_{1}}\right\}$. Let $\left({p}_{1},...,{p}_{n}\right)$ be an $n-$tuple of non-negative numbers such that $0<{\sum }_{i=1}^{{n}_{1}}{p}_{i}={𝐩}_{{𝐧}_{\mathbf{1}}}<{𝐩}_{𝐧}={\sum }_{i=1}^{n}{p}_{i}$. If

 $\left(m,M\right)\cap \left[{m}_{i},{M}_{i}\right]=\varnothing ,\phantom{\rule{2.em}{0ex}}i={n}_{1}+1,...,n$ ()

and one of two equalities

 $\frac{1}{{𝐩}_{{𝐧}_{\mathbf{1}}}}\sum _{i=1}^{{n}_{1}}{p}_{i}{A}_{i}=\frac{1}{{𝐩}_{𝐧}}\sum _{i=1}^{n}{p}_{i}{A}_{i}=\frac{1}{{𝐩}_{𝐧}-{𝐩}_{{𝐧}_{\mathbf{1}}}}\sum _{i={n}_{1}+1}^{n}{p}_{i}{A}_{i}$ ()

is valid, then

 $\frac{1}{{𝐩}_{{𝐧}_{\mathbf{1}}}}\sum _{i=1}^{{n}_{1}}{p}_{i}f\left({A}_{i}\right)\le \frac{1}{{𝐩}_{𝐧}}\sum _{i=1}^{n}{p}_{i}f\left({A}_{i}\right)\le \frac{1}{{𝐩}_{𝐧}-{𝐩}_{{𝐧}_{\mathbf{1}}}}\sum _{i={n}_{1}+1}^{n}{p}_{i}f\left({A}_{i}\right)$ ()

holds for every continuous convex function $f:I\to ℝ$ provided that the interval $I$ contains all ${m}_{i},{M}_{i}$, $i=1,...,n$.

If $f:I\to ℝ$ is concave, then the reverse inequality is valid in ().

As a special case of Corollary  we can obtain a discrete version of Corollary  as follows.

Corollary 20 (Discrete version of Corollary ) Let $\left({A}_{1},...,{A}_{n}\right)$ be an $n-$tuple of self-adjoint operators ${A}_{i}\in B\left(H\right)$ with the bounds ${m}_{i}$ and ${M}_{i}$, ${m}_{i}\le {M}_{i}$, $i=1,...,n$. Let $\left({\alpha }_{1},...,{\alpha }_{n}\right)$ be an $n-$tuple of nonnegative real numbers such that ${\sum }_{i=1}^{n}{\alpha }_{i}=1$. If

 $\left({m}_{A},{M}_{A}\right)\cap \left[{m}_{i},{M}_{i}\right]=\varnothing ,\phantom{\rule{2.em}{0ex}}i=1,...,n$ ()

where ${m}_{A}$ and ${M}_{A}$, ${m}_{A}\le {M}_{A}$, are the bounds of $A={\sum }_{i=1}^{n}{\alpha }_{i}{A}_{i}$, then

 $f\left(\sum _{i=1}^{n}{\alpha }_{i}{A}_{i}\right)\le \sum _{i=1}^{n}{\alpha }_{i}f\left({A}_{i}\right)$ ()

holds for every continuous convex function $f:I\to ℝ$ provided that the interval $I$ contains all ${m}_{i},{M}_{i}$.

We prove only the convex case. We define an $\left(n+1\right)$-tuple of operators $\left({B}_{1},...,{B}_{n+1}\right)$, ${B}_{i}\in B\left(H\right)$, by ${B}_{1}=A={\sum }_{i=1}^{n}{\alpha }_{i}{A}_{i}$ and ${B}_{i}={A}_{i-1}$, $i=2,...,n+1$. Then ${m}_{{B}_{1}}={m}_{A}$, ${M}_{{B}_{1}}={M}_{A}$ are the bounds of ${B}_{1}$ and ${m}_{{B}_{i}}={m}_{i-1}$, ${M}_{{B}_{i}}={M}_{i-1}$ are those of ${B}_{i}$, $i=2,...,n+1$. Also, we define an $\left(n+1\right)$-tuple of non-negative numbers $\left({p}_{1},...,{p}_{n+1}\right)$ by ${p}_{1}=1$ and ${p}_{i}={\alpha }_{i-1}$, $i=2,...,n+1$. Then ${\sum }_{i=1}^{n+1}{p}_{i}=2$ and, by using (), we have

 $\left({m}_{{B}_{1}},{M}_{{B}_{1}}\right)\cap \left[{m}_{{B}_{i}},{M}_{{B}_{i}}\right]=\varnothing ,\phantom{\rule{2.em}{0ex}}i=2,...,n+1$ ()

Since

 $\sum _{i=1}^{n+1}{p}_{i}{B}_{i}={B}_{1}+\sum _{i=2}^{n+1}{p}_{i}{B}_{i}=\sum _{i=1}^{n}{\alpha }_{i}{A}_{i}+\sum _{i=1}^{n}{\alpha }_{i}{A}_{i}=2{B}_{1}$ ()

then

 ${p}_{1}{B}_{1}=\frac{1}{2}\sum _{i=1}^{n+1}{p}_{i}{B}_{i}=\sum _{i=2}^{n+1}{p}_{i}{B}_{i}$ ()

Taking into account () and (), we can apply Corollary  for ${n}_{1}=1$ and ${B}_{i}$, ${p}_{i}$ as above, and we get

 ${p}_{1}f\left({B}_{1}\right)\le \frac{1}{2}\sum _{i=1}^{n+1}{p}_{i}f\left({B}_{i}\right)\le \sum _{i=2}^{n+1}{p}_{i}f\left({B}_{i}\right)$ ()

which gives the desired inequality ().
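Corollary 20 can be illustrated numerically. The sketch below uses our own hypothetical $2\times 2$ matrices, chosen so that the spectra of ${A}_{1}$ and ${A}_{2}$ avoid the open interval $\left({m}_{A},{M}_{A}\right)$, with the convex (but not operator convex) function $f\left(t\right)={t}^{4}$.

```python
import numpy as np

# Hypothetical matrices (our own illustration): Sp(A1) = {-1.9, -1.1},
# Sp(A2) = {3.2, 3.8}, and A = (A1 + A2)/2 has Sp(A) = {0.75, 1.25},
# so (m_A, M_A) = (0.75, 1.25) is disjoint from both spectra.
A1 = np.array([[-1.5, 0.4], [0.4, -1.5]])
A2 = np.array([[3.2, 0.0], [0.0, 3.8]])
w = [0.5, 0.5]                                # the weights alpha_1, alpha_2

A = w[0] * A1 + w[1] * A2
f = lambda X: np.linalg.matrix_power(X, 4)    # convex but not operator convex

# Jensen's operator inequality holds here: the difference is positive definite
diff = w[0] * f(A1) + w[1] * f(A2) - f(A)
print(np.linalg.eigvalsh(diff))               # both eigenvalues positive
```

Without the spectral condition () this ordering can fail for ${t}^{4}$, as Example 18 shows.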

## 6. Extension of the refined Jensen's inequality

There is an extensive literature devoted to refinements and extensions of Jensen's inequality; see, for example, [22], [23], [24], [25], [26], [27], [28], [29].

In this section we present an extension of the refined Jensen's inequality obtained in Section  and a refinement of the same inequality obtained in Section .

Theorem 21 Let $\left({A}_{1},...,{A}_{n}\right)$ be an $n-$tuple of self-adjoint operators ${A}_{i}\in B\left(H\right)$ with the bounds ${m}_{i}$ and ${M}_{i}$, ${m}_{i}\le {M}_{i}$, $i=1,...,n$. Let $\left({\Phi }_{1},...,{\Phi }_{n}\right)$ be an $n-$tuple of positive linear mappings ${\Phi }_{i}:B\left(H\right)\to B\left(K\right)$, such that ${\sum }_{i=1}^{{n}_{1}}{\Phi }_{i}\left({1}_{H}\right)=\alpha \phantom{\rule{0.166667em}{0ex}}{1}_{K}$, ${\sum }_{i={n}_{1}+1}^{n}{\Phi }_{i}\left({1}_{H}\right)=\beta \phantom{\rule{0.166667em}{0ex}}{1}_{K}$, where $1\le {n}_{1}<n$, $\alpha ,\beta >0$ and $\alpha +\beta =1$. Let ${m}_{L}=min\left\{{m}_{1},...,{m}_{{n}_{1}}\right\}$, ${M}_{R}=max\left\{{M}_{1},...,{M}_{{n}_{1}}\right\}$ and

 $\begin{array}{ccc}\hfill m& =& max\left\{{M}_{i}:{M}_{i}\le {m}_{L},i\in \left\{{n}_{1}+1,...,n\right\}\right\}\hfill \\ \hfill M& =& min\left\{{m}_{i}:{m}_{i}\ge {M}_{R},i\in \left\{{n}_{1}+1,...,n\right\}\right\}\hfill \end{array}$ ()

If

 $\left({m}_{L},{M}_{R}\right)\cap \left[{m}_{i},{M}_{i}\right]=\varnothing ,\phantom{\rule{1.em}{0ex}}i={n}_{1}+1,...,n,\phantom{\rule{2.em}{0ex}}\text{and}\phantom{\rule{2.em}{0ex}}m<M$ ()

and one of two equalities

 $\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left({A}_{i}\right)=\sum _{i=1}^{n}{\Phi }_{i}\left({A}_{i}\right)=\frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left({A}_{i}\right)$ ()

is valid, then

 $\begin{array}{ccc}\hfill \frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)& \le & \frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)+\beta {\delta }_{f}\stackrel{˜}{A}\le \sum _{i=1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\hfill \\ & \le & \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)-\alpha {\delta }_{f}\stackrel{˜}{A}\le \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\hfill \end{array}$ ()

holds for every continuous convex function $f:I\to ℝ$ provided that the interval $I$ contains all ${m}_{i},{M}_{i}$, $i=1,...,n$, where

 $\begin{array}{c}{\delta }_{f}\equiv {\delta }_{f}\left(\overline{m},\overline{M}\right)=f\left(\overline{m}\right)+f\left(\overline{M}\right)-2f\left(\frac{\overline{m}+\overline{M}}{2}\right)\\ \stackrel{˜}{A}\equiv {\stackrel{˜}{A}}_{A,\Phi ,{n}_{1},\alpha }\left(\overline{m},\overline{M}\right)=\frac{1}{2}{1}_{K}-\frac{1}{\alpha \left(\overline{M}-\overline{m}\right)}\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(\left|{A}_{i}-\frac{\overline{m}+\overline{M}}{2}{1}_{H}\right|\right)\end{array}$ ()

and   $\overline{m}\in \left[m,{m}_{L}\right]$, $\overline{M}\in \left[{M}_{R},M\right]$, $\overline{m}<\overline{M}$,   are arbitrary numbers. If $f:I\to ℝ$ is concave, then the reverse inequality is valid in ().

We prove only the convex case. Let us denote

 $A=\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left({A}_{i}\right),\phantom{\rule{2.em}{0ex}}B=\frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left({A}_{i}\right),\phantom{\rule{2.em}{0ex}}C=\sum _{i=1}^{n}{\Phi }_{i}\left({A}_{i}\right)$ ()

It is easy to verify that $A=B$ or $B=C$ or $A=C$ implies $A=B=C$.

Since $f$ is convex on $\left[\overline{m},\overline{M}\right]$ and $\mathrm{𝖲𝗉}\left({A}_{i}\right)\subseteq \left[{m}_{i},{M}_{i}\right]\subseteq \left[\overline{m},\overline{M}\right]$ for $i=1,...,{n}_{1}$, it follows from Lemma  that

 $f\left({A}_{i}\right)\le \frac{\overline{M}{1}_{H}-{A}_{i}}{\overline{M}-\overline{m}}f\left(\overline{m}\right)+\frac{{A}_{i}-\overline{m}{1}_{H}}{\overline{M}-\overline{m}}f\left(\overline{M}\right)-{\delta }_{f}{\stackrel{˜}{A}}_{i},\phantom{\rule{2.em}{0ex}}i=1,...,{n}_{1}$ ()

holds, where ${\delta }_{f}=f\left(\overline{m}\right)+f\left(\overline{M}\right)-2f\left(\frac{\overline{m}+\overline{M}}{2}\right)$ and ${\stackrel{˜}{A}}_{i}=\frac{1}{2}{1}_{H}-\frac{1}{\overline{M}-\overline{m}}\left|{A}_{i}-\frac{\overline{m}+\overline{M}}{2}{1}_{H}\right|$. Applying a positive linear mapping ${\Phi }_{i}$ and summing, we obtain

 $\begin{array}{ccc}\hfill \sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)& \le & \frac{\overline{M}\alpha {1}_{K}-{\sum }_{i=1}^{{n}_{1}}{\Phi }_{i}\left({A}_{i}\right)}{\overline{M}-\overline{m}}f\left(\overline{m}\right)+\frac{{\sum }_{i=1}^{{n}_{1}}{\Phi }_{i}\left({A}_{i}\right)-\overline{m}\alpha {1}_{K}}{\overline{M}-\overline{m}}f\left(\overline{M}\right)\hfill \\ & -& {\delta }_{f}\left(\frac{\alpha }{2}{1}_{K}-\frac{1}{\overline{M}-\overline{m}}\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(\left|{A}_{i}-\frac{\overline{m}+\overline{M}}{2}{1}_{H}\right|\right)\right)\hfill \end{array}$ ()

since ${\sum }_{i=1}^{{n}_{1}}{\Phi }_{i}\left({1}_{H}\right)=\alpha {1}_{K}$. It follows that

 $\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\le \frac{\overline{M}{1}_{K}-A}{\overline{M}-\overline{m}}f\left(\overline{m}\right)+\frac{A-\overline{m}{1}_{K}}{\overline{M}-\overline{m}}f\left(\overline{M}\right)-{\delta }_{f}\stackrel{˜}{A}$ ()

where $\stackrel{˜}{A}=\frac{1}{2}{1}_{K}-\frac{1}{\alpha \left(\overline{M}-\overline{m}\right)}{\sum }_{i=1}^{{n}_{1}}{\Phi }_{i}\left(\left|{A}_{i}-\frac{\overline{m}+\overline{M}}{2}{1}_{H}\right|\right)$.

Additionally, since $f$ is convex on all $\left[{m}_{i},{M}_{i}\right]$ and $\left(\overline{m},\overline{M}\right)\cap \left[{m}_{i},{M}_{i}\right]=\varnothing$, $i={n}_{1}+1,...,n$, we have

 $f\left({A}_{i}\right)\ge \frac{\overline{M}{1}_{H}-{A}_{i}}{\overline{M}-\overline{m}}f\left(\overline{m}\right)+\frac{{A}_{i}-\overline{m}{1}_{H}}{\overline{M}-\overline{m}}f\left(\overline{M}\right),\phantom{\rule{2.em}{0ex}}i={n}_{1}+1,...,n$ ()

It follows

 $\frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)-{\delta }_{f}\stackrel{˜}{A}\ge \frac{\overline{M}{1}_{K}-B}{\overline{M}-\overline{m}}f\left(\overline{m}\right)+\frac{B-\overline{m}{1}_{K}}{\overline{M}-\overline{m}}f\left(\overline{M}\right)-{\delta }_{f}\stackrel{˜}{A}$ ()

Combining () and () and taking into account that $A=B$, we obtain

 $\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\le \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)-{\delta }_{f}\stackrel{˜}{A}$ ()

Next, we obtain

 $\begin{array}{ccc}& & \frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\hfill \\ & =& \sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)+\frac{\beta }{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\phantom{\rule{1.em}{0ex}}\left(\text{by}\phantom{\rule{0.277778em}{0ex}}\alpha +\beta =1\right)\hfill \\ & \le & \sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)+\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)-\beta {\delta }_{f}\stackrel{˜}{A}\phantom{\rule{2.em}{0ex}}\phantom{\rule{2.em}{0ex}}\left(\text{by}\phantom{\rule{0.277778em}{0ex}}\left(\right)\right)\hfill \\ & \le & \frac{\alpha }{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)-\alpha {\delta }_{f}\stackrel{˜}{A}+\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)-\beta {\delta }_{f}\stackrel{˜}{A}\phantom{\rule{2.em}{0ex}}\left(\text{by}\phantom{\rule{0.277778em}{0ex}}\left(\right)\right)\hfill \\ & =& \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)-{\delta }_{f}\stackrel{˜}{A}\phantom{\rule{1.em}{0ex}}\left(\text{by}\phantom{\rule{0.277778em}{0ex}}\alpha +\beta =1\right)\hfill \end{array}$ ()

which gives the following double inequality

 $\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\le \sum _{i=1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)-\beta {\delta }_{f}\stackrel{˜}{A}\le \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)-{\delta }_{f}\stackrel{˜}{A}$ ()

Adding $\beta {\delta }_{f}\stackrel{˜}{A}$ to the above inequalities, we get

 $\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)+\beta {\delta }_{f}\stackrel{˜}{A}\le \sum _{i=1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\le \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)-\alpha {\delta }_{f}\stackrel{˜}{A}$ ()

Now, we remark that ${\delta }_{f}\ge 0$ and $\stackrel{˜}{A}\ge 0$. (Indeed, since $f$ is convex, we have $f\left(\left(\overline{m}+\overline{M}\right)/2\right)\le \left(f\left(\overline{m}\right)+f\left(\overline{M}\right)\right)/2$, which implies that ${\delta }_{f}\ge 0$. Also, since

 $\mathrm{𝖲𝗉}\left({A}_{i}\right)\subseteq \left[\overline{m},\overline{M}\right]\phantom{\rule{1.em}{0ex}}⇒\phantom{\rule{1.em}{0ex}}\left|{A}_{i}-\frac{\overline{M}+\overline{m}}{2}{1}_{H}\right|\le \frac{\overline{M}-\overline{m}}{2}{1}_{H},\phantom{\rule{2.em}{0ex}}i=1,...,{n}_{1}$ ()

then

 $\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(\left|{A}_{i}-\frac{\overline{M}+\overline{m}}{2}{1}_{H}\right|\right)\le \frac{\overline{M}-\overline{m}}{2}\alpha {1}_{K}$ ()

which gives

 $0\le \frac{1}{2}{1}_{K}-\frac{1}{\alpha \left(\overline{M}-\overline{m}\right)}\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(\left|{A}_{i}-\frac{\overline{M}+\overline{m}}{2}{1}_{H}\right|\right)=\stackrel{˜}{A}$.) ()

Consequently, the following inequalities

 $\begin{array}{ccc}& & \frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\le \frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)+\beta {\delta }_{f}\stackrel{˜}{A}\hfill \\ & & \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)-\alpha {\delta }_{f}\stackrel{˜}{A}\le \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\hfill \end{array}$ ()

hold, which together with () proves the desired series of inequalities ().

Example 22 We observe the matrix case of Theorem  for $f\left(t\right)={t}^{4}$, which is convex but not operator convex, with $n=4$, ${n}_{1}=2$ and the bounds of the matrices as in Fig. 3.

#### Figure 3.

An example of a convex function and the bounds of four operators

We give an example for which

 $\begin{array}{cc}& \frac{1}{\alpha }\left({\Phi }_{1}\left({A}_{1}^{4}\right)+{\Phi }_{2}\left({A}_{2}^{4}\right)\right)<\frac{1}{\alpha }\left({\Phi }_{1}\left({A}_{1}^{4}\right)+{\Phi }_{2}\left({A}_{2}^{4}\right)\right)+\beta {\delta }_{f}\stackrel{˜}{A}\\ & <{\Phi }_{1}\left({A}_{1}^{4}\right)+{\Phi }_{2}\left({A}_{2}^{4}\right)+{\Phi }_{3}\left({A}_{3}^{4}\right)+{\Phi }_{4}\left({A}_{4}^{4}\right)\\ & <\frac{1}{\beta }\left({\Phi }_{3}\left({A}_{3}^{4}\right)+{\Phi }_{4}\left({A}_{4}^{4}\right)\right)-\alpha {\delta }_{f}\stackrel{˜}{A}<\frac{1}{\beta }\left({\Phi }_{3}\left({A}_{3}^{4}\right)+{\Phi }_{4}\left({A}_{4}^{4}\right)\right)\end{array}$ ()

holds, where ${\delta }_{f}={\overline{M}}^{4}+{\overline{m}}^{4}-{\left(\overline{M}+\overline{m}\right)}^{4}/8$ and

 $\stackrel{˜}{A}=\frac{1}{2}{I}_{2}-\frac{1}{\alpha \left(\overline{M}-\overline{m}\right)}\left({\Phi }_{1}\left(|{A}_{1}-\frac{\overline{M}+\overline{m}}{2}{I}_{3}|\right)+{\Phi }_{2}\left(|{A}_{2}-\frac{\overline{M}+\overline{m}}{2}{I}_{3}|\right)\right)$ ()

We define mappings ${\Phi }_{i}:{M}_{3}\left(ℂ\right)\to {M}_{2}\left(ℂ\right)$ as follows: ${\Phi }_{i}\left({\left({a}_{jk}\right)}_{1\le j,k\le 3}\right)=\frac{1}{4}{\left({a}_{jk}\right)}_{1\le j,k\le 2}$, $i=1,...,4$. Then ${\sum }_{i=1}^{4}{\Phi }_{i}\left({I}_{3}\right)={I}_{2}$ and $\alpha =\beta =\frac{1}{2}$.

Let

 ${A}_{1}=2\left(\begin{array}{ccc}2& 9/8& 1\\ 9/8& 2& 0\\ 1& 0& 3\end{array}\right),{A}_{2}=2\left(\begin{array}{ccc}2& 9/8& 0\\ 9/8& 1& 0\\ 0& 0& 2\end{array}\right),{A}_{3}=-3\left(\begin{array}{ccc}4& 1/2& 1\\ 1/2& 4& 0\\ 1& 0& 2\end{array}\right),{A}_{4}=12\left(\begin{array}{ccc}5/3& 1/2& 0\\ 1/2& 3/2& 0\\ 0& 0& 3\end{array}\right)$ ()

Then ${m}_{1}=1.28607$, ${M}_{1}=7.70771$, ${m}_{2}=0.53777$, ${M}_{2}=5.46221$, ${m}_{3}=-14.15050$, ${M}_{3}=-4.71071$, ${m}_{4}=12.91724$, ${M}_{4}=36$, so ${m}_{L}={m}_{2}$, ${M}_{R}={M}_{1}$, $m={M}_{3}$ and $M={m}_{4}$ (rounded to five decimal places). Also,

 $\frac{1}{\alpha }\left({\Phi }_{1}\left({A}_{1}\right)+{\Phi }_{2}\left({A}_{2}\right)\right)=\frac{1}{\beta }\left({\Phi }_{3}\left({A}_{3}\right)+{\Phi }_{4}\left({A}_{4}\right)\right)=\left(\begin{array}{cc}4& 9/4\\ 9/4& 3\end{array}\right)$ ()

and

 $\begin{array}{ccc}\hfill {A}_{f}\equiv \frac{1}{\alpha }\left({\Phi }_{1}\left({A}_{1}^{4}\right)+{\Phi }_{2}\left({A}_{2}^{4}\right)\right)& =& \left(\begin{array}{cc}989.00391& 663.46875\\ 663.46875& 526.12891\end{array}\right)\hfill \\ \hfill {C}_{f}\equiv {\Phi }_{1}\left({A}_{1}^{4}\right)+{\Phi }_{2}\left({A}_{2}^{4}\right)+{\Phi }_{3}\left({A}_{3}^{4}\right)+{\Phi }_{4}\left({A}_{4}^{4}\right)& =& \left(\begin{array}{cc}68093.14258& 48477.98437\\ 48477.98437& 51335.39258\end{array}\right)\hfill \\ \hfill {B}_{f}\equiv \frac{1}{\beta }\left({\Phi }_{3}\left({A}_{3}^{4}\right)+{\Phi }_{4}\left({A}_{4}^{4}\right)\right)& =& \left(\begin{array}{cc}135197.28125& 96292.5\\ 96292.5& 102144.65625\end{array}\right)\hfill \end{array}$ ()

Then

 ${A}_{f}<{C}_{f}<{B}_{f}$ ()

holds (which is consistent with ()).
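The matrices ${A}_{f}$, ${C}_{f}$, ${B}_{f}$ and the ordering ${A}_{f}<{C}_{f}<{B}_{f}$ can be reproduced numerically. In the sketch below, the scalar factor $2$ in ${A}_{2}$ is the value consistent with its stated bounds ${m}_{2}=0.53777$, ${M}_{2}=5.46221$.

```python
import numpy as np

# Matrices of Example 22; the factor 2 in A2 matches the stated bounds m2, M2
A1 = 2 * np.array([[2, 9/8, 1], [9/8, 2, 0], [1, 0, 3]])
A2 = 2 * np.array([[2, 9/8, 0], [9/8, 1, 0], [0, 0, 2]])
A3 = -3 * np.array([[4, 1/2, 1], [1/2, 4, 0], [1, 0, 2]])
A4 = 12 * np.array([[5/3, 1/2, 0], [1/2, 3/2, 0], [0, 0, 3]])

Phi = lambda X: X[:2, :2] / 4                 # Phi_1 = ... = Phi_4
alpha = beta = 0.5
f = lambda X: np.linalg.matrix_power(X, 4)

Af = (Phi(f(A1)) + Phi(f(A2))) / alpha
Cf = Phi(f(A1)) + Phi(f(A2)) + Phi(f(A3)) + Phi(f(A4))
Bf = (Phi(f(A3)) + Phi(f(A4))) / beta

# Af < Cf < Bf in the operator order: both differences are positive definite
print(np.min(np.linalg.eigvalsh(Cf - Af)), np.min(np.linalg.eigvalsh(Bf - Cf)))
```

Both printed minimal eigenvalues are positive, confirming the strict operator ordering stated above.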

We will choose three pairs of numbers $\left(\overline{m},\overline{M}\right)$, $\overline{m}\in \left[-4.71071,0.53777\right]$, $\overline{M}\in \left[7.70771,12.91724\right]$ as follows

i) $\overline{m}={m}_{L}=0.53777$, $\overline{M}={M}_{R}=7.70771$, then

${\stackrel{˜}{\Delta }}_{1}=\beta {\delta }_{f}\stackrel{˜}{A}=0.5·2951.69249·\left(\begin{array}{cc}0.15678& 0.09030\\ 0.09030& 0.15943\end{array}\right)=\left(\begin{array}{cc}231.38908& 133.26139\\ 133.26139& 235.29515\end{array}\right)$

ii) $\overline{m}=m=-4.71071$, $\overline{M}=M=12.91724$, then

${\stackrel{˜}{\Delta }}_{2}=\beta {\delta }_{f}\stackrel{˜}{A}=0.5·27766.07963·\left(\begin{array}{cc}0.36022& 0.03573\\ 0.03573& 0.36155\end{array}\right)=\left(\begin{array}{cc}5000.89860& 496.04498\\ 496.04498& 5019.50711\end{array}\right)$

iii) $\overline{m}=-1$, $\overline{M}=10$, then

${\stackrel{˜}{\Delta }}_{3}=\beta {\delta }_{f}\stackrel{˜}{A}=0.5·9180.875·\left(\begin{array}{cc}0.28203& 0.08975\\ 0.08975& 0.27557\end{array}\right)=\left(\begin{array}{cc}1294.66& 411.999\\ 411.999& 1265.\end{array}\right)$
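The quantities ${\delta }_{f}$ and $\stackrel{˜}{A}$ in the three cases can be recomputed directly from their definitions; in the sketch below the operator absolute value is evaluated spectrally, $|X|={\left({X}^{2}\right)}^{1/2}$. (As above, the factor $2$ in ${A}_{2}$ is the value consistent with its stated bounds; small deviations from the printed numbers come from the five-decimal rounding of $\overline{m}$, $\overline{M}$.)

```python
import numpy as np

def abs_op(X):
    # operator absolute value |X| = V diag(|lambda|) V^T for symmetric X
    w, V = np.linalg.eigh(X)
    return (V * np.abs(w)) @ V.T

# A1, A2 of Example 22 (factor 2 in A2 matches its stated bounds)
A1 = 2 * np.array([[2, 9/8, 1], [9/8, 2, 0], [1, 0, 3]])
A2 = 2 * np.array([[2, 9/8, 0], [9/8, 1, 0], [0, 0, 2]])
Phi = lambda X: X[:2, :2] / 4
alpha = beta = 0.5

for mbar, Mbar in [(0.53777, 7.70771), (-4.71071, 12.91724), (-1.0, 10.0)]:
    c = (mbar + Mbar) / 2
    delta_f = mbar**4 + Mbar**4 - (mbar + Mbar)**4 / 8
    Atilde = 0.5 * np.eye(2) - (Phi(abs_op(A1 - c * np.eye(3)))
                                + Phi(abs_op(A2 - c * np.eye(3)))) / (alpha * (Mbar - mbar))
    # delta_f reproduces 2951.69..., 27766.08..., 9180.875 up to rounding;
    # beta * delta_f * Atilde is the improvement term of cases i)-iii)
    print(round(delta_f, 5), np.round(beta * delta_f * Atilde, 3))
```

Note that $0\le \stackrel{˜}{A}\le \frac{1}{2}{I}_{2}$ in each case, as guaranteed by the proof of Theorem 21.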

Now, we obtain the following improvement of () (see ())

#### Table 1.

Using Theorem  we get the following result.

Corollary 23 Let the assumptions of Theorem  hold. Then

 $\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\le \frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)+{\gamma }_{1}{\delta }_{f}\stackrel{˜}{A}\le \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)$ ()

and

 $\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\le \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)-{\gamma }_{2}{\delta }_{f}\stackrel{˜}{A}\le \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)$ ()

hold for every ${\gamma }_{1},{\gamma }_{2}$ in the closed interval joining $\alpha$ and $\beta$, where ${\delta }_{f}$ and $\stackrel{˜}{A}$ are defined by ().

Adding $\alpha {\delta }_{f}\stackrel{˜}{A}$ to () and noticing that ${\delta }_{f}\stackrel{˜}{A}\ge 0$, we obtain

 $\frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)\le \frac{1}{\alpha }\sum _{i=1}^{{n}_{1}}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)+\alpha {\delta }_{f}\stackrel{˜}{A}\le \frac{1}{\beta }\sum _{i={n}_{1}+1}^{n}{\Phi }_{i}\left(f\left({A}_{i}\right)\right)$ ()

Taking into account the above inequality and the left-hand side of (), we obtain ().

Similarly, subtracting $\beta {\delta }_{f}\stackrel{˜}{A}$ from (), we obtain ().

Remark 24 We can obtain extensions of the inequalities given in Remarks  and . Also, we can obtain a special case of Theorem  with a convex combination of the operators ${A}_{i}$ by putting ${\Phi }_{i}\left(B\right)={\alpha }_{i}B$, $i=1,...,n$, similarly as in Corollary . Finally, applying this result, we can give another proof of Corollary . The interested reader can find the details in [30].

## References

1 - Davis C (1957) A Schwarz inequality for convex operator functions. Proc. Amer. Math. Soc. 8: 42-44.
2 - Choi M.D (1974) A Schwarz inequality for positive linear maps on ${C}^{*}$-algebras. Illinois J. Math. 18: 565-574.
3 - Hansen F, Pedersen G.K (1982) Jensen's inequality for operators and Löwner's theorem. Math. Ann. 258: 229-241.
4 - Mond B, Pečarić J (1995) On Jensen's inequality for operator convex functions. Houston J. Math. 21: 739-754.
5 - Hansen F, Pedersen G.K (2003) Jensen's operator inequality. Bull. London Math. Soc. 35: 553-564.
6 - Mond B, Pečarić J (1994) Converses of Jensen's inequality for several operators, Rev. Anal. Numér. Théor. Approx. 23: 179-183.
7 - Mond B, Pečarić J.E (1993) Converses of Jensen's inequality for linear maps of operators, An. Univ. Vest Timiş. Ser. Mat.-Inform. XXXI 2: 223-228.
8 - Furuta T (1998) Operator inequalities associated with Hölder-McCarthy and Kantorovich inequalities. J. Inequal. Appl. 2: 137-148.
9 - Mićić J, Seo Y, Takahasi S.E, Tominaga M (1999) Inequalities of Furuta and Mond-Pečarić. Math. Inequal. Appl. 2: 83-111.
10 - Mićić J, Pečarić J, Seo Y, Tominaga M (2000) Inequalities of positive linear maps on Hermitian matrices. Math. Inequal. Appl. 3: 559-591.
11 - Fujii M, Mićić Hot J, Pečarić J, Seo Y (2012) Recent Developments of Mond-Pečarić method in Operator Inequalities. Zagreb: Element. 320 p. In print.
12 - Fujii J.I (2011) An external version of the Jensen operator inequality. Sci. Math. Japon. Online e-2011: 59-62.
13 - Fujii J.I, Pečarić J, Seo Y (2012) The Jensen inequality in an external formula. J. Math. Inequal. Accepted for publication.
14 - Hansen F, Pečarić J, Perić I (2007) Jensen's operator inequality and its converses. Math. Scand. 100: 61-73.
15 - Furuta T, Mićić Hot J, Pečarić J, Seo Y (2005) Mond-Pečarić Method in Operator Inequalities. Zagreb: Element. 262 p.
16 - Mićić J, Pečarić J, Seo Y (2010) Converses of Jensen's operator inequality. Oper. Matrices 4: 385-403.
17 - Mićić J, Pavić Z, Pečarić J (2011) Some better bounds in converses of the Jensen operator inequality. Oper. Matrices. In print.
18 - Mićić J, Pavić Z, Pečarić J (2011) Jensen's inequality for operators without operator convexity. Linear Algebra Appl. 434: 1228-1237.
19 - Mićić J, Pečarić J, Perić J (2012) Refined Jensen's operator inequality with condition on spectra. Oper. Matrices. Accepted for publication.
20 - Mitrinović D.S, Pečarić J.E, Fink A.M (1993) Classical and New Inequalities in Analysis. Dordrecht-Boston-London: Kluwer Acad. Publ. 740 p.
21 - Mićić J, Pavić Z, Pečarić J (2011) Extension of Jensen's operator inequality for operators without operator convexity. Abstr. Appl. Anal. 2011: 1-14.
22 - Abramovich S, Jameson G, Sinnamon G (2004) Refining Jensen's inequality, Bull. Math. Soc. Sci. Math. Roumanie (N.S.) 47: 3-14.
23 - Dragomir S.S (2010) A new refinement of Jensen's inequality in linear spaces with applications. Math. Comput. Modelling 52: 1497-1505.
24 - Khosravi M, Aujla J.S, Dragomir S.S, Moslehian M.S (2011) Refinements of Choi-Davis-Jensen's inequality. Bull. Math. Anal. Appl. 3: 127-133.
25 - Moslehian M.S (2009) Operator extensions of Hua's inequality. Linear Algebra Appl. 430: 1131-1139.
26 - Rooin J (2005) A refinement of Jensen's inequality, J. Ineq. Pure and Appl. Math. 6. 2. Art. 38: 4 p.
27 - Srivastava H.M, Xia Z.G, Zhang Z.H (2011) Some further refinements and extensions of the Hermite-Hadamard and Jensen inequalities in several variables. Math. Comput. Modelling 54: 2709-2717.
28 - Xiao Z.G, Srivastava H.M, Zhang Z.H (2010) Further refinements of the Jensen inequalities based upon samples with repetitions. Math. Comput. Modelling 51: 592-600.
29 - Wang L.C, Ma X.F, Liu L.H (2009) A note on some new refinements of Jensen's inequality for convex functions. J. Inequal. Pure Appl. Math. 10. 2. Art. 48: 6 p.
30 - Mićić J, Pečarić J, Perić J (2012) Extension of the refined Jensen's operator inequality with condition on spectra. Ann. Funct. Anal. 3: 67-85.