Opcode | Encoding | 16-bit | 32-bit | 64-bit | CPUID Feature Flag(s) | Description |
---|---|---|---|---|---|---|
NP 0F 5E /r DIVPS xmm1, xmm2/m128 | rm | Invalid | Valid | Valid | sse | Divide packed single-precision floating-point values in xmm1 by those in xmm2/m128. Store the result in xmm1. |
VEX.128.NP.0F.WIG 5E /r VDIVPS xmm1, xmm2, xmm3/m128 | rvm | Invalid | Valid | Valid | avx | Divide packed single-precision floating-point values in xmm2 by those in xmm3/m128. Store the result in xmm1. |
VEX.256.NP.0F.WIG 5E /r VDIVPS ymm1, ymm2, ymm3/m256 | rvm | Invalid | Valid | Valid | avx | Divide packed single-precision floating-point values in ymm2 by those in ymm3/m256. Store the result in ymm1. |
EVEX.128.NP.0F.W0 5E /r VDIVPS xmm1 {k1}{z}, xmm2, xmm3/m128/m32bcst | ervm | Invalid | Valid | Valid | avx512-f avx512-vl | Divide packed single-precision floating-point values in xmm2 by those in xmm3/m128/m32bcst. Store the result in xmm1. |
EVEX.256.NP.0F.W0 5E /r VDIVPS ymm1 {k1}{z}, ymm2, ymm3/m256/m32bcst | ervm | Invalid | Valid | Valid | avx512-f avx512-vl | Divide packed single-precision floating-point values in ymm2 by those in ymm3/m256/m32bcst. Store the result in ymm1. |
EVEX.512.NP.0F.W0 5E /r VDIVPS zmm1 {k1}{z}, zmm2, zmm3/m512/m32bcst{er} | ervm | Invalid | Valid | Valid | avx512-f | Divide packed single-precision floating-point values in zmm2 by those in zmm3/m512/m32bcst. Store the result in zmm1. |
Encoding
Encoding | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4 |
---|---|---|---|---|---|
rm | n/a | ModRM.reg[rw] | ModRM.r/m[r] | n/a | n/a |
rvm | n/a | ModRM.reg[rw] | VEX.vvvv[r] | ModRM.r/m[r] | n/a |
ervm | full | ModRM.reg[rw] | EVEX.vvvv[r] | ModRM.r/m[r] | n/a |
Description
The (V)DIVPS instruction divides four, eight, or sixteen packed single-precision floating-point values in the first source operand by the corresponding values in the second source operand. The result is stored in the destination operand.
All forms except the legacy SSE one zero the upper (untouched) bits of the destination register.
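For illustration only, here is a minimal C sketch of the basic (unmasked) operation using the _mm_div_ps intrinsic listed under Intrinsics below; the element values are arbitrary:

#include <stdio.h>
#include <xmmintrin.h>

int main(void)
{
    /* Four packed single-precision dividends and divisors (arbitrary values). */
    __m128 a = _mm_setr_ps(10.0f, 20.0f, 30.0f, 40.0f);
    __m128 b = _mm_setr_ps( 2.0f,  4.0f,  5.0f,  8.0f);

    /* DIVPS: element-wise a / b. */
    __m128 q = _mm_div_ps(a, b);

    float out[4];
    _mm_storeu_ps(out, q);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);   /* 5 5 6 5 */
    return 0;
}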
Operation
public void DIVPS(SimdF32 dest, SimdF32 src)
{
dest[0] /= src[0];
dest[1] /= src[1];
dest[2] /= src[2];
dest[3] /= src[3];
// dest[4..] is unmodified
}
void VDIVPS_Vex(SimdF32 dest, SimdF32 src1, SimdF32 src2, int kl)
{
for (int n = 0; n < kl; n++)
dest[n] = src1[n] / src2[n];
dest[kl..] = 0;
}
public void VDIVPS_Vex128(SimdF32 dest, SimdF32 src1, SimdF32 src2) =>
VDIVPS_Vex(dest, src1, src2, 4);
public void VDIVPS_Vex256(SimdF32 dest, SimdF32 src1, SimdF32 src2) =>
VDIVPS_Vex(dest, src1, src2, 8);
void VDIVPS_EvexMemory(SimdF32 dest, SimdF32 src1, SimdF32 src2, KMask k, int kl)
{
for (int n = 0; n < kl; n++)
{
if (k[n])
dest[n] = src1[n] / (EVEX.b ? src2[0] : src2[n]);
else if (EVEX.z)
dest[n] = 0;
// otherwise unchanged
}
dest[kl..] = 0;
}
public void VDIVPS_Evex128Memory(SimdF32 dest, SimdF32 src1, SimdF32 src2, KMask k) =>
VDIVPS_EvexMemory(dest, src1, src2, k, 4);
public void VDIVPS_Evex256Memory(SimdF32 dest, SimdF32 src1, SimdF32 src2, KMask k) =>
VDIVPS_EvexMemory(dest, src1, src2, k, 8);
public void VDIVPS_Evex512Memory(SimdF32 dest, SimdF32 src1, SimdF32 src2, KMask k) =>
VDIVPS_EvexMemory(dest, src1, src2, k, 16);
void VDIVPS_EvexRegister(SimdF32 dest, SimdF32 src1, SimdF32 src2, KMask k, int kl)
{
if (kl == 16 && EVEX.b)
OverrideRoundingModeForThisInstruction(EVEX.rc);
for (int n = 0; n < kl; n++)
{
if (k[n])
dest[n] = src1[n] / src2[n];
else if (EVEX.z)
dest[n] = 0;
// otherwise unchanged
}
dest[kl..] = 0;
}
public void VDIVPS_Evex128Register(SimdF32 dest, SimdF32 src1, SimdF32 src2, KMask k) =>
VDIVPS_EvexRegister(dest, src1, src2, k, 4);
public void VDIVPS_Evex256Register(SimdF32 dest, SimdF32 src1, SimdF32 src2, KMask k) =>
VDIVPS_EvexRegister(dest, src1, src2, k, 8);
public void VDIVPS_Evex512Register(SimdF32 dest, SimdF32 src1, SimdF32 src2, KMask k) =>
VDIVPS_EvexRegister(dest, src1, src2, k, 16);
Intrinsics
__m128 _mm_div_ps(__m128 a, __m128 b)
__m128 _mm_mask_div_ps(__m128 s, __mmask8 k, __m128 a, __m128 b)
__m128 _mm_maskz_div_ps(__mmask8 k, __m128 a, __m128 b)
__m256 _mm256_div_ps(__m256 a, __m256 b)
__m256 _mm256_mask_div_ps(__m256 s, __mmask8 k, __m256 a, __m256 b)
__m256 _mm256_maskz_div_ps(__mmask8 k, __m256 a, __m256 b)
__m512 _mm512_div_ps(__m512 a, __m512 b)
__m512 _mm512_div_round_ps(__m512 a, __m512 b, const int rounding)
__m512 _mm512_mask_div_ps(__m512 s, __mmask16 k, __m512 a, __m512 b)
__m512 _mm512_mask_div_round_ps(__m512 s, __mmask16 k, __m512 a, __m512 b, const int rounding)
__m512 _mm512_maskz_div_ps(__mmask16 k, __m512 a, __m512 b)
__m512 _mm512_maskz_div_round_ps(__mmask16 k, __m512 a, __m512 b, const int rounding)
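As a usage sketch only, the masked, zero-masked, and embedded-rounding intrinsics above can be combined as shown here. The function name div_low_half and the mask value are hypothetical; the sketch assumes an AVX-512F target (e.g. compiled with -mavx512f):

#include <immintrin.h>

/* Hypothetical helper, assuming an AVX-512F target. */
__m512 div_low_half(__m512 a, __m512 b)
{
    __mmask16 k = 0x00FF;                 /* operate on the low 8 lanes only */

    /* Zero-masking ({z}): unselected lanes of the result are zeroed. */
    __m512 zeroed = _mm512_maskz_div_ps(k, a, b);

    /* Merge-masking: unselected lanes keep the value from 'zeroed'.
       Embedded rounding (EVEX {er}) via the _round_ variant: round toward
       zero, with exceptions suppressed. */
    return _mm512_mask_div_round_ps(zeroed, k, a, b,
                                    _MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC);
}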
Exceptions
SIMD Floating-Point
#XM
- #D - Denormal operand.
- #I - Invalid operation.
- #O - Numeric overflow.
- #P - Inexact result.
- #U - Numeric underflow.
- #Z - Divide-by-zero.
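For illustration only, a small C sketch of how a masked #Z condition raised by DIVPS is recorded as a sticky MXCSR flag rather than delivered as a #XM trap; it assumes the standard xmmintrin.h MXCSR helpers (_MM_SET_EXCEPTION_STATE, _MM_GET_EXCEPTION_STATE, _MM_EXCEPT_DIV_ZERO) and the processor's default, exceptions-masked configuration:

#include <stdio.h>
#include <xmmintrin.h>

int main(void)
{
    /* Clear any previously recorded SIMD floating-point exception flags. */
    _MM_SET_EXCEPTION_STATE(0);

    /* volatile keeps the compiler from constant-folding the division. */
    volatile float one = 1.0f, zero = 0.0f;
    __m128 a = _mm_set1_ps(one);
    __m128 b = _mm_set1_ps(zero);

    float out[4];
    _mm_storeu_ps(out, _mm_div_ps(a, b));   /* raises #Z, masked by default */

    if (_MM_GET_EXCEPTION_STATE() & _MM_EXCEPT_DIV_ZERO)
        printf("divide-by-zero recorded in MXCSR\n");
    return 0;
}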
Other Exceptions
VEX Encoded Form: See Type 2 Exception Conditions.
EVEX Encoded Form: See Type E2 Exception Conditions.