Opcode | Encoding | 16-bit | 32-bit | 64-bit | CPUID Feature Flag(s) | Description |
---|---|---|---|---|---|---|
66 0F 54 /r ANDPD xmm1, xmm2/m128 | rm | Invalid | Valid | Valid | sse2 | Logical AND packed double-precision floating-point values from xmm1 and xmm2/m128. Store the result in xmm1. |
VEX.128.66.0F.WIG 54 /r VANDPD xmm1, xmm2, xmm3/m128 | rvm | Invalid | Valid | Valid | avx | Logical AND packed double-precision floating-point values from xmm2 and xmm3/m128. Store the result in xmm1. |
VEX.256.66.0F.WIG 54 /r VANDPD ymm1, ymm2, ymm3/m256 | rvm | Invalid | Valid | Valid | avx | Logical AND packed double-precision floating-point values from ymm2 and ymm3/m256. Store the result in ymm1. |
EVEX.128.66.0F.W1 54 /r VANDPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst | ervm | Invalid | Valid | Valid | avx512-f avx512-vl avx512-dq | Logical AND packed double-precision floating-point values from xmm2 and xmm3/m128/m64bcst. Store the result in xmm1. |
EVEX.256.66.0F.W1 54 /r VANDPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst | ervm | Invalid | Valid | Valid | avx512-f avx512-vl avx512-dq | Logical AND packed double-precision floating-point values from ymm2 and ymm3/m256/m64bcst. Store the result in ymm1. |
EVEX.512.66.0F.W1 54 /r VANDPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst | ervm | Invalid | Valid | Valid | avx512-f avx512-dq | Logical AND packed double-precision floating-point values from zmm2 and zmm3/m512/m64bcst. Store the result in zmm1. |
Encoding
Encoding | Tuple Type | Operand 1 | Operand 2 | Operand 3 |
---|---|---|---|---|
rm | n/a | ModRM.reg[rw] | ModRM.r/m[r] | |
rvm | n/a | ModRM.reg[w] | VEX.vvvv[r] | ModRM.r/m[r] |
ervm | full | ModRM.reg[w] | EVEX.vvvv[r] | ModRM.r/m[r] |
Description
The (V)ANDPD instruction performs a bitwise AND of two, four, or eight packed double-precision floating-point values from the two source operands and stores the result in the destination operand.
The legacy SSE form leaves the upper (untouched) bits of the destination register unmodified; all VEX and EVEX encoded forms zero them.
Operation
public void ANDPD(SimdU64 dest, SimdU64 src)
{
    dest[0] &= src[0];
    dest[1] &= src[1];
    // dest[2..] is unmodified
}
void VANDPD_Vex(SimdU64 dest, SimdU64 src1, SimdU64 src2, int kl)
{
    for (int n = 0; n < kl; n++)
        dest[n] = src1[n] & src2[n];
    dest[kl..] = 0;
}
public void VANDPD_Vex128(SimdU64 dest, SimdU64 src1, SimdU64 src2) =>
    VANDPD_Vex(dest, src1, src2, 2);
public void VANDPD_Vex256(SimdU64 dest, SimdU64 src1, SimdU64 src2) =>
    VANDPD_Vex(dest, src1, src2, 4);
void VANDPD_EvexMemory(SimdU64 dest, SimdU64 src1, SimdU64 src2, KMask k, int kl)
{
    for (int n = 0; n < kl; n++)
    {
        if (k[n])
            dest[n] = src1[n] & (EVEX.b ? src2[0] : src2[n]);
        else if (EVEX.z)
            dest[n] = 0;
        // otherwise unchanged
    }
    dest[kl..] = 0;
}
public void VANDPD_Evex128Memory(SimdU64 dest, SimdU64 src1, SimdU64 src2, KMask k) =>
    VANDPD_EvexMemory(dest, src1, src2, k, 2);
public void VANDPD_Evex256Memory(SimdU64 dest, SimdU64 src1, SimdU64 src2, KMask k) =>
    VANDPD_EvexMemory(dest, src1, src2, k, 4);
public void VANDPD_Evex512Memory(SimdU64 dest, SimdU64 src1, SimdU64 src2, KMask k) =>
    VANDPD_EvexMemory(dest, src1, src2, k, 8);
void VANDPD_EvexRegister(SimdU64 dest, SimdU64 src1, SimdU64 src2, KMask k, int kl)
{
    // VANDPD is a bitwise operation and does not support embedded rounding;
    // EVEX.b with a register operand is reserved (see Type E4 Exception Conditions).
    for (int n = 0; n < kl; n++)
    {
        if (k[n])
            dest[n] = src1[n] & src2[n];
        else if (EVEX.z)
            dest[n] = 0;
        // otherwise unchanged
    }
    dest[kl..] = 0;
}
public void VANDPD_Evex128Register(SimdU64 dest, SimdU64 src1, SimdU64 src2, KMask k) =>
    VANDPD_EvexRegister(dest, src1, src2, k, 2);
public void VANDPD_Evex256Register(SimdU64 dest, SimdU64 src1, SimdU64 src2, KMask k) =>
    VANDPD_EvexRegister(dest, src1, src2, k, 4);
public void VANDPD_Evex512Register(SimdU64 dest, SimdU64 src1, SimdU64 src2, KMask k) =>
    VANDPD_EvexRegister(dest, src1, src2, k, 8);
Intrinsics
__m128d _mm_and_pd(__m128d a, __m128d b)
__m128d _mm_mask_and_pd(__m128d s, __mmask8 k, __m128d a, __m128d b)
__m128d _mm_maskz_and_pd(__mmask8 k, __m128d a, __m128d b)
__m256d _mm256_and_pd(__m256d a, __m256d b)
__m256d _mm256_mask_and_pd(__m256d s, __mmask8 k, __m256d a, __m256d b)
__m256d _mm256_maskz_and_pd(__mmask8 k, __m256d a, __m256d b)
__m512d _mm512_and_pd(__m512d a, __m512d b)
__m512d _mm512_mask_and_pd(__m512d s, __mmask8 k, __m512d a, __m512d b)
__m512d _mm512_maskz_and_pd(__mmask8 k, __m512d a, __m512d b)
Exceptions
SIMD Floating-Point
None.
Other Exceptions
VEX Encoded Form: See Type 4 Exception Conditions.
EVEX Encoded Form: See Type E4 Exception Conditions.