log_poisson_loss#
- ivy.log_poisson_loss(true, pred, /, *, compute_full_loss=False, axis=-1, reduction='none', out=None)[source]#
Compute the log-likelihood loss between the prediction and the target under the assumption that the target has a Poisson distribution. Caveat: By default, this is not the exact loss, but the loss minus a constant term [log(z!)]. That has no effect for optimization, but does not play well with relative loss comparisons. To compute an approximation of the log factorial term, specify compute_full_loss=True to enable Stirling's Approximation.
- Parameters:
- true (Union[Array, NativeArray]) – input array containing true labels.
- pred (Union[Array, NativeArray]) – input array containing predicted labels.
- compute_full_loss (bool, default: False) – whether to compute the full loss. If false, a constant term is dropped in favor of more efficient optimization. Default: False.
- axis (int, default: -1) – the axis along which to compute the log-likelihood loss. If axis is -1, the log-likelihood loss will be computed along the last dimension. Default: -1.
- reduction (str, default: 'none') – 'none': No reduction will be applied to the output. 'mean': The output will be averaged. 'sum': The output will be summed. Default: 'none'.
- out (Optional[Array], default: None) – optional output array, for writing the result to. It must have a shape that the inputs broadcast to.
- Return type:
Array
- Returns:
ret – The log-likelihood loss between the given distributions.
Examples
>>> x = ivy.array([0, 0, 1, 0])
>>> y = ivy.array([0.25, 0.25, 0.25, 0.25])
>>> print(ivy.log_poisson_loss(x, y))
ivy.array([1.28402555, 1.28402555, 1.03402555, 1.28402555])
>>> z = ivy.array([0.1, 0.1, 0.7, 0.1])
>>> print(ivy.log_poisson_loss(x, z, reduction='mean'))
ivy.array(1.1573164)
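As a cross-check of the values above (an illustrative sketch, not taken from the library), the default loss follows the usual convention in which pred is the log of the Poisson rate, so each element is exp(pred) - true * pred; the dropped log(true!) term is only restored, via Stirling's approximation, when compute_full_loss=True is passed.
>>> import numpy as np
>>> true = np.array([0., 0., 1., 0.])
>>> pred = np.array([0.25, 0.25, 0.25, 0.25])
>>> print(np.exp(pred) - true * pred)  # same values as ivy.log_poisson_loss(x, y) above
[1.28402542 1.28402542 1.03402542 1.28402542]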
- Array.log_poisson_loss(self, target, /, *, compute_full_loss=False, axis=-1, reduction='none', out=None)[source]#
ivy.Array instance method variant of ivy.log_poisson_loss. This method simply wraps the function, and so the docstring for ivy.log_poisson_loss also applies to this method with minimal changes.
- Parameters:
- self (Union[Array, NativeArray]) – input array containing true labels.
- target (Union[Array, NativeArray]) – input array containing targeted labels.
- compute_full_loss (bool, default: False) – whether to compute the full loss. If false, a constant term is dropped in favor of more efficient optimization. Default: False.
- axis (int, default: -1) – the axis along which to compute the log-likelihood loss. If axis is -1, the log-likelihood loss will be computed along the last dimension. Default: -1.
- reduction (str, default: 'none') – 'none': No reduction will be applied to the output. 'mean': The output will be averaged. 'sum': The output will be summed. Default: 'none'.
- out (Optional[Array], default: None) – optional output array, for writing the result to. It must have a shape that the inputs broadcast to.
- Return type:
Array
- Returns:
ret – The log-likelihood loss between the given distributions.
Examples
>>> x = ivy.array([0, 0, 1, 0])
>>> y = ivy.array([0.25, 0.25, 0.25, 0.25])
>>> loss = x.log_poisson_loss(y)
>>> print(loss)
ivy.array([1.28402555, 1.28402555, 1.03402555, 1.28402555])
>>> z = ivy.array([0.1, 0.1, 0.7, 0.1])
>>> loss = x.log_poisson_loss(z, reduction='mean')
>>> print(loss)
ivy.array(1.1573164)
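For completeness, a small sketch (not part of the original examples) of the 'sum' reduction with the same inputs; the result is simply the sum of the per-element losses shown above, roughly 3 × 1.284 + 1.034 ≈ 4.886.
>>> x = ivy.array([0, 0, 1, 0])
>>> y = ivy.array([0.25, 0.25, 0.25, 0.25])
>>> loss = x.log_poisson_loss(y, reduction='sum')
>>> print(loss)  # ≈ ivy.array(4.886); the exact printed precision depends on the backend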
- Container.log_poisson_loss(self, target, /, *, compute_full_loss=False, axis=-1, reduction='mean', key_chains=None, to_apply=True, prune_unapplied=False, map_sequences=False, out=None)[source]#
ivy.Container instance method variant of ivy.log_poisson_loss. This method simply wraps the function, and so the docstring for ivy.log_poisson_loss also applies to this method with minimal changes.
- Parameters:
- self (Container) – input container.
- target (Union[Container, Array, NativeArray]) – input array or container containing the targeted values.
- compute_full_loss (bool, default: False) – whether to compute the full loss. If false, a constant term is dropped in favor of more efficient optimization. Default: False.
- axis (int, default: -1) – the axis along which to compute the log-likelihood loss. If axis is -1, the log-likelihood loss will be computed along the last dimension. Default: -1.
- reduction (Optional[Union[str, Container]], default: 'mean') – 'mean': The output will be averaged. 'sum': The output will be summed. 'none': No reduction will be applied to the output. Default: 'mean'.
- key_chains (Optional[Union[List[str], Dict[str, str], Container]], default: None) – The key-chains to apply or not apply the method to. Default is None.
- to_apply (Union[bool, Container], default: True) – If True, the method will be applied to key_chains, otherwise key_chains will be skipped. Default is True.
- prune_unapplied (Union[bool, Container], default: False) – Whether to prune key_chains for which the function was not applied. Default is False.
- map_sequences (Union[bool, Container], default: False) – Whether to also map method to sequences (lists, tuples). Default is False.
- out (Optional[Container], default: None) – optional output container, for writing the result to. It must have a shape that the inputs broadcast to.
- Return type:
Container
- Returns:
ret – The log-likelihood loss between the input container and the targeted values.
Examples
>>> x = ivy.Container(a=ivy.array([1, 2, 3]), b=ivy.array([4, 5, 6]))
>>> y = ivy.Container(a=ivy.array([2, 2, 2]), b=ivy.array([5, 5, 5]))
>>> z = x.log_poisson_loss(y)
>>> print(z)
{
    a: ivy.array(3.3890561),
    b: ivy.array(123.413159)
}
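The container-specific arguments can be illustrated with the same inputs. The sketch below assumes the usual ivy.Container semantics, where key_chains restricts the method to the listed leaves and prune_unapplied=True drops the leaves it was not applied to, so only the loss for 'a' (the same value as above) should remain; treat the printed result as illustrative rather than verified output.
>>> x = ivy.Container(a=ivy.array([1, 2, 3]), b=ivy.array([4, 5, 6]))
>>> y = ivy.Container(a=ivy.array([2, 2, 2]), b=ivy.array([5, 5, 5]))
>>> z = x.log_poisson_loss(y, key_chains=['a'], prune_unapplied=True)
>>> print(z)
{
    a: ivy.array(3.3890561)
}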