cross_entropy#
- ivy.cross_entropy(true, pred, /, *, axis=-1, epsilon=1e-07, reduction='mean', out=None)[source]#
Compute cross-entropy between predicted and true discrete distributions.
- Parameters:
  - true (Union[Array, NativeArray]) – input array containing true labels.
  - pred (Union[Array, NativeArray]) – input array containing the predicted labels.
  - axis (int, default: -1) – the axis along which to compute the cross-entropy. If axis is -1, the cross-entropy will be computed along the last dimension. Default: -1.
  - epsilon (float, default: 1e-07) – a float in [0.0, 1.0] specifying the amount of smoothing when calculating the loss. If epsilon is 0, no smoothing will be applied. Default: 1e-7.
  - reduction (str, default: 'mean') – 'none': no reduction will be applied to the output; 'mean': the output will be averaged; 'sum': the output will be summed. Default: 'mean'.
  - out (Optional[Array], default: None) – optional output array, for writing the result to. It must have a shape that the inputs broadcast to.
- Return type:
  Array
- Returns:
  ret – The cross-entropy loss between the given distributions.
Examples
>>> x = ivy.array([0, 0, 1, 0])
>>> y = ivy.array([0.25, 0.25, 0.25, 0.25])
>>> print(ivy.cross_entropy(x, y))
ivy.array(0.34657359)

>>> z = ivy.array([0.1, 0.1, 0.7, 0.1])
>>> print(ivy.cross_entropy(x, z))
ivy.array(0.08916873)
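For intuition, the computation can be sketched in a few lines of ivy. This is a minimal sketch, not the library's actual implementation: it assumes epsilon is used to clip predictions away from 0 before taking the log, and that the default 'mean' reduction averages the element-wise loss over axis.

>>> import ivy
>>> def cross_entropy_sketch(true, pred, axis=-1, epsilon=1e-7):
...     # Clip predictions away from 0 so log() stays finite
...     # (assumed role of the epsilon argument).
...     pred = ivy.clip(pred, epsilon, 1 - epsilon)
...     # Element-wise negative log-likelihood of the true labels,
...     # averaged over `axis` (the 'mean' reduction).
...     return -ivy.mean(ivy.log(pred) * true, axis=axis)
>>> x = ivy.array([0, 0, 1, 0])
>>> y = ivy.array([0.25, 0.25, 0.25, 0.25])
>>> print(cross_entropy_sketch(x, y))
ivy.array(0.34657359)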
- Array.cross_entropy(self, pred, /, *, axis=-1, epsilon=1e-07, reduction='mean', out=None)[source]#
ivy.Array instance method variant of ivy.cross_entropy. This method simply wraps the function, and so the docstring for ivy.cross_entropy also applies to this method with minimal changes.
- Parameters:
  - self (Array) – input array containing true labels.
  - pred (Union[Array, NativeArray]) – input array containing the predicted labels.
  - axis (int, default: -1) – the axis along which to compute the cross-entropy. If axis is -1, the cross-entropy will be computed along the last dimension. Default: -1.
  - epsilon (float, default: 1e-07) – a float in [0.0, 1.0] specifying the amount of smoothing when calculating the loss. If epsilon is 0, no smoothing will be applied. Default: 1e-7.
  - reduction (str, default: 'mean') – 'none': no reduction will be applied to the output; 'mean': the output will be averaged; 'sum': the output will be summed. Default: 'mean'.
  - out (Optional[Array], default: None) – optional output array, for writing the result to. It must have a shape that the inputs broadcast to.
- Return type:
Array
- Returns:
ret – The cross-entropy loss between the given distributions.
Examples
>>> x = ivy.array([0, 0, 1, 0])
>>> y = ivy.array([0.25, 0.25, 0.25, 0.25])
>>> z = x.cross_entropy(y)
>>> print(z)
ivy.array(0.34657359)
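The reduction argument controls how the element-wise loss is collapsed. Assuming 'sum' simply sums the element-wise loss instead of averaging it (as the 'mean' default does), the same inputs should give -log(0.25):

>>> x = ivy.array([0, 0, 1, 0])
>>> y = ivy.array([0.25, 0.25, 0.25, 0.25])
>>> z = x.cross_entropy(y, reduction='sum')
>>> print(z)
ivy.array(1.38629436)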
- Container.cross_entropy(self, pred, /, *, axis=-1, epsilon=1e-07, reduction='mean', key_chains=None, to_apply=True, prune_unapplied=False, map_sequences=False, out=None)[source]#
ivy.Container instance method variant of ivy.cross_entropy. This method simply wraps the function, and so the docstring for ivy.cross_entropy also applies to this method with minimal changes.
- Parameters:
  - self (Container) – input container containing true labels.
  - pred (Union[Container, Array, NativeArray]) – input array or container containing the predicted labels.
  - axis (Union[int, Container], default: -1) – the axis along which to compute the cross-entropy. If axis is -1, the cross-entropy will be computed along the last dimension. Default: -1.
  - epsilon (Union[float, Container], default: 1e-07) – a float in [0.0, 1.0] specifying the amount of smoothing when calculating the loss. If epsilon is 0, no smoothing will be applied. Default: 1e-7.
  - reduction (Union[str, Container], default: 'mean') – 'none': no reduction will be applied to the output; 'mean': the output will be averaged; 'sum': the output will be summed. Default: 'mean'.
  - key_chains (Optional[Union[List[str], Dict[str, str], Container]], default: None) – The key-chains to apply or not apply the method to. Default is None.
  - to_apply (Union[bool, Container], default: True) – If True, the method will be applied to key_chains, otherwise key_chains will be skipped. Default is True.
  - prune_unapplied (Union[bool, Container], default: False) – Whether to prune key_chains for which the function was not applied. Default is False.
  - map_sequences (Union[bool, Container], default: False) – Whether to also map method to sequences (lists, tuples). Default is False.
  - out (Optional[Container], default: None) – optional output container, for writing the result to. It must have a shape that the inputs broadcast to.
- Return type:
Container
- Returns:
ret – The cross-entropy loss between the given distributions.
Examples
>>> x = ivy.Container(a=ivy.array([1, 0, 0]), b=ivy.array([0, 0, 1]))
>>> y = ivy.Container(a=ivy.array([0.6, 0.2, 0.3]), b=ivy.array([0.8, 0.2, 0.2]))
>>> z = x.cross_entropy(y)
>>> print(z)
{
    a: ivy.array(0.17027519),
    b: ivy.array(0.53647931)
}
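Since the container method simply wraps the function, the result above can be reproduced by mapping the functional form over the matching leaves by hand. A sketch of the equivalent per-leaf calls, using only the functional API documented above:

>>> z_manual = ivy.Container(a=ivy.cross_entropy(x.a, y.a),
...                          b=ivy.cross_entropy(x.b, y.b))
>>> print(z_manual)
{
    a: ivy.array(0.17027519),
    b: ivy.array(0.53647931)
}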