Opcode / Instruction | Encoding | 16-bit | 32-bit | 64-bit | CPUID Feature Flag(s) | Description |
---|---|---|---|---|---|---|
66 0F 55 /r ANDNPD xmm1, xmm2/m128 | rm | Invalid | Valid | Valid | sse2 | Logical AND packed double-precision floating-point values from xmm1 (inverted) and xmm2/m128. Store the result in xmm1. |
VEX.128.66.0F.WIG 55 /r VANDNPD xmm1, xmm2, xmm3/m128 | rvm | Invalid | Valid | Valid | avx | Logical AND packed double-precision floating-point values from xmm2 (inverted) and xmm3/m128. Store the result in xmm1. |
VEX.256.66.0F.WIG 55 /r VANDNPD ymm1, ymm2, ymm3/m256 | rvm | Invalid | Valid | Valid | avx | Logical AND packed double-precision floating-point values from ymm2 (inverted) and ymm3/m256. Store the result in ymm1. |
EVEX.128.66.0F.W1 55 /r VANDNPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst | ervm | Invalid | Valid | Valid | avx512-f avx512-vl avx512-dq | Logical AND packed double-precision floating-point values from xmm2 (inverted) and xmm3/m128/m64bcst. Store the result in xmm1. |
EVEX.256.66.0F.W1 55 /r VANDNPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst | ervm | Invalid | Valid | Valid | avx512-f avx512-vl avx512-dq | Logical AND packed double-precision floating-point values from ymm2 (inverted) and ymm3/m256/m64bcst. Store the result in ymm1. |
EVEX.512.66.0F.W1 55 /r VANDNPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst | ervm | Invalid | Valid | Valid | avx512-f avx512-dq | Logical AND packed double-precision floating-point values from zmm2 (inverted) and zmm3/m512/m64bcst. Store the result in zmm1. |
Encoding
Encoding | Tuple Type | Operand 1 | Operand 2 | Operand 3 |
---|---|---|---|---|
rm | n/a | ModRM.reg[rw] | ModRM.r/m[r] | |
rvm | n/a | ModRM.reg[rw] | VEX.vvvv[r] | ModRM.r/m[r] |
ervm | full | ModRM.reg[rw] | EVEX.vvvv[r] | ModRM.r/m[r] |
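As a worked example of the rm encoding (hand-assembled; the register choice is arbitrary): the byte sequence 66 0F 55 C8 decodes as ANDNPD xmm1, xmm0. The ModRM byte C8 has mod = 11 (register direct), reg = 001 (xmm1, the read-write destination), and r/m = 000 (xmm0, the read-only source).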
Description
The (V)ANDNPD instruction ANDs two, four, or eight double-precision floating-point values from the two source operands. The first source operand is inverted before being ANDed with the other source operand (per 64-bit lane: DEST := NOT(SRC1) AND SRC2). The result is stored in the destination operand.
All forms except the legacy SSE one zero the upper (untouched) bits of the destination register.
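A common use of the ANDN operation is bit clearing; for example, the absolute value of packed doubles can be computed by ANDing the inverted sign-bit mask with the input. A minimal C sketch using the SSE2 intrinsic from the Intrinsics section below (the surrounding scaffolding is illustrative, not part of the instruction's definition):

#include <emmintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d x = _mm_set_pd(-2.5, 1.5);
    __m128d signMask = _mm_set1_pd(-0.0);       // only the sign bit set in each lane
    __m128d absX = _mm_andnot_pd(signMask, x);  // ~signMask & x clears the sign bits
    double out[2];
    _mm_storeu_pd(out, absX);
    printf("%f %f\n", out[0], out[1]);          // 1.500000 2.500000
    return 0;
}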
Operation
public void ANDNPD(SimdU64 dest, SimdU64 src)
{
    dest[0] = ~dest[0] & src[0];
    dest[1] = ~dest[1] & src[1];
    // dest[2..] is unmodified
}
void VANDNPD_Vex(SimdU64 dest, SimdU64 src1, SimdU64 src2, int kl)
{
    for (int n = 0; n < kl; n++)
        dest[n] = ~src1[n] & src2[n];
    dest[kl..] = 0;
}

public void VANDNPD_Vex128(SimdU64 dest, SimdU64 src1, SimdU64 src2) =>
    VANDNPD_Vex(dest, src1, src2, 2);

public void VANDNPD_Vex256(SimdU64 dest, SimdU64 src1, SimdU64 src2) =>
    VANDNPD_Vex(dest, src1, src2, 4);
void VANDNPD_EvexMemory(SimdU64 dest, SimdU64 src1, SimdU64 src2, KMask k, int kl)
{
    for (int n = 0; n < kl; n++)
    {
        if (k[n])
            dest[n] = ~src1[n] & (EVEX.b ? src2[0] : src2[n]);
        else if (EVEX.z)
            dest[n] = 0;
        // otherwise unchanged
    }
    dest[kl..] = 0;
}

public void VANDNPD_Evex128Memory(SimdU64 dest, SimdU64 src1, SimdU64 src2, KMask k) =>
    VANDNPD_EvexMemory(dest, src1, src2, k, 2);

public void VANDNPD_Evex256Memory(SimdU64 dest, SimdU64 src1, SimdU64 src2, KMask k) =>
    VANDNPD_EvexMemory(dest, src1, src2, k, 4);

public void VANDNPD_Evex512Memory(SimdU64 dest, SimdU64 src1, SimdU64 src2, KMask k) =>
    VANDNPD_EvexMemory(dest, src1, src2, k, 8);
void VANDNPD_EvexRegister(SimdU64 dest, SimdU64 src1, SimdU64 src2, KMask k, int kl)
{
    // ANDNPD is a bitwise operation with no rounding semantics;
    // EVEX.b is reserved for the register form (see Type E4 Exception Conditions)
    for (int n = 0; n < kl; n++)
    {
        if (k[n])
            dest[n] = ~src1[n] & src2[n];
        else if (EVEX.z)
            dest[n] = 0;
        // otherwise unchanged
    }
    dest[kl..] = 0;
}

public void VANDNPD_Evex128Register(SimdU64 dest, SimdU64 src1, SimdU64 src2, KMask k) =>
    VANDNPD_EvexRegister(dest, src1, src2, k, 2);

public void VANDNPD_Evex256Register(SimdU64 dest, SimdU64 src1, SimdU64 src2, KMask k) =>
    VANDNPD_EvexRegister(dest, src1, src2, k, 4);

public void VANDNPD_Evex512Register(SimdU64 dest, SimdU64 src1, SimdU64 src2, KMask k) =>
    VANDNPD_EvexRegister(dest, src1, src2, k, 8);
Intrinsics
__m128d _mm_andnot_pd(__m128d a, __m128d b)
__m128d _mm_mask_andnot_pd(__m128d s, __mmask8 k, __m128d a, __m128d b)
__m128d _mm_maskz_andnot_pd(__mmask8 k, __m128d a, __m128d b)
__m256d _mm256_andnot_pd(__m256d a, __m256d b)
__m256d _mm256_mask_andnot_pd(__m256d s, __mmask8 k, __m256d a, __m256d b)
__m256d _mm256_maskz_andnot_pd(__mmask8 k, __m256d a, __m256d b)
__m512d _mm512_andnot_pd(__m512d a, __m512d b)
__m512d _mm512_mask_andnot_pd(__m512d s, __mmask8 k, __m512d a, __m512d b)
__m512d _mm512_maskz_andnot_pd(__mmask8 k, __m512d a, __m512d b)
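The _mask_ and _maskz_ intrinsics correspond to the EVEX forms' merge- and zero-masking. A short C sketch contrasting the two (intrinsic names are as listed above; the scaffolding is illustrative; requires AVX-512 F and DQ support, e.g. compile with -mavx512dq on GCC or Clang):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m512d src = _mm512_set1_pd(9.0);   // merge source for unselected lanes
    __m512d a   = _mm512_set1_pd(-0.0);  // first source; inverted by the operation
    __m512d b   = _mm512_set1_pd(-4.0);
    __mmask8 k  = 0x0F;                  // select only the low four lanes

    __m512d merged = _mm512_mask_andnot_pd(src, k, a, b);  // unselected lanes keep 9.0
    __m512d zeroed = _mm512_maskz_andnot_pd(k, a, b);      // unselected lanes become 0.0

    double m[8], z[8];
    _mm512_storeu_pd(m, merged);
    _mm512_storeu_pd(z, zeroed);
    printf("%f %f\n", m[0], m[7]);  // 4.000000 9.000000
    printf("%f %f\n", z[0], z[7]);  // 4.000000 0.000000
    return 0;
}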
Exceptions
SIMD Floating-Point
None.
Other Exceptions
Legacy SSE and VEX Encoded Forms: See Type 4 Exception Conditions.
EVEX Encoded Form: See Type E4 Exception Conditions.