Summary
Detail | | | |
---|---|---|---|
Vendor | Xen | First view | 2017-01-26 |
Product | Xen | Last view | 2024-12-19 |
Version | 4.8.0 | Type | OS |
Update | rc3 | | |
Edition | * | | |
Language | * | | |
Software Edition | * | | |
Target Software | * | | |
Target Hardware | * | | |
Other | * | | |
CPE Product | cpe:2.3:o:xen:xen | | |
Activity : Overall
Related : CVE
Score | Date | Alert | Description |
---|---|---|---|
0 | 2024-12-19 | CVE-2024-45818 | The hypervisor contains code to accelerate VGA memory accesses for HVM guests, when the (virtual) VGA is in "standard" mode. Locking involved there has an unusual discipline, leaving a lock acquired past the return from the function that acquired it. This results in a problem when emulating an instruction with two memory accesses, both of which touch VGA memory (plus some further constraints which aren't relevant here). When emulating the 2nd access, an attempt would be made to re-acquire the already-held lock, resulting in a deadlock. This deadlock was already found when the code was first introduced, but was analysed incorrectly and the fix was incomplete. Analysis in light of the new finding cannot find a way to make the existing locking discipline work. In staging, this logic has all been removed because it was discovered to have been accidentally disabled since Xen 4.7. Therefore, we are fixing the locking problem by backporting the removal of most of the feature. Note that even with the feature disabled, the lock would still be acquired for any accesses to the VGA MMIO region. |
3.3 | 2024-01-05 | CVE-2023-46837 | Arm provides multiple helpers to clean & invalidate the cache for a given region. This is, for instance, used when allocating guest memory to ensure any writes (such as the ones during scrubbing) have reached memory before handing over the page to a guest. Unfortunately, the arithmetic in the helpers can overflow, which would then result in the cache cleaning/invalidation being skipped. There is therefore no guarantee that all the writes have reached memory. This undefined behavior was meant to be addressed by XSA-437, but the approach was not sufficient. |
5.5 | 2024-01-05 | CVE-2023-34328 | [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] AMD CPUs since ~2014 have extensions to normal x86 debugging functionality. Xen supports guests using these extensions. Unfortunately there are errors in Xen's handling of the guest state, leading to denials of service. 1) CVE-2023-34327 - An HVM vCPU can end up operating in the context of 2) CVE-2023-34328 - A PV vCPU can place a breakpoint over the live GDT. |
5.5 | 2024-01-05 | CVE-2023-34323 | When a transaction is committed, C Xenstored will first check that the quota is correct before attempting to commit any nodes. It is possible for accounting to be temporarily negative if a node has been removed outside of the transaction. Unfortunately, some versions of C Xenstored assume that the quota cannot be negative and use assert() to confirm it. This leads to a C Xenstored crash when the tools are built without -DNDEBUG (the default). |
7.8 | 2024-01-05 | CVE-2023-34322 | For migration, as well as to work around kernels unaware of L1TF (see XSA-273), PV guests may be run in shadow paging mode. Since Xen itself needs to be mapped when PV guests run, Xen and shadowed PV guests run directly on the respective shadow page tables. For 64-bit PV guests this means running on the shadow of the guest root page table. In the course of dealing with a shortage of memory in the shadow pool associated with a domain, shadows of page tables may be torn down. This tearing down may include the shadow root page table that the CPU in question is presently running on. While a precaution exists that is supposed to prevent the tearing down of the underlying live page table, the time window covered by that precaution isn't large enough. |
3.3 | 2024-01-05 | CVE-2023-34321 | Arm provides multiple helpers to clean & invalidate the cache for a given region. This is, for instance, used when allocating guest memory to ensure any writes (such as the ones during scrubbing) have reached memory before handing over the page to a guest. Unfortunately, the arithmetic in the helpers can overflow, which would then result in the cache cleaning/invalidation being skipped. There is therefore no guarantee that all the writes have reached memory. |
6.5 | 2023-03-21 | CVE-2022-42334 | x86/HVM pinned cache attributes mis-handling [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] To allow cachability control for HVM guests with passed-through devices, an interface exists to explicitly override defaults which would otherwise be put in place. While not exposed to the affected guests themselves, the interface specifically exists for domains controlling such guests. This interface may therefore be used by not fully privileged entities, e.g. qemu running deprivileged in Dom0 or qemu running in a so-called stub-domain. With this exposure it is an issue that - the number of such controlled regions was unbounded (CVE-2022-42333), - installation and removal of such regions was not properly serialized (CVE-2022-42334). |
8.6 | 2023-03-21 | CVE-2022-42333 | x86/HVM pinned cache attributes mis-handling [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] To allow cachability control for HVM guests with passed-through devices, an interface exists to explicitly override defaults which would otherwise be put in place. While not exposed to the affected guests themselves, the interface specifically exists for domains controlling such guests. This interface may therefore be used by not fully privileged entities, e.g. qemu running deprivileged in Dom0 or qemu running in a so-called stub-domain. With this exposure it is an issue that - the number of such controlled regions was unbounded (CVE-2022-42333), - installation and removal of such regions was not properly serialized (CVE-2022-42334). |
5.5 | 2023-03-21 | CVE-2022-42331 | x86: speculative vulnerability in 32bit SYSCALL path Due to an oversight in the very original Spectre/Meltdown security work (XSA-254), one entry path performs its speculation-safety actions too late. In some configurations, there is an unprotected RET instruction which can be attacked with a variety of speculative attacks. |
5.5 | 2022-11-01 | CVE-2022-42310 | Xenstore: Guests can create orphaned Xenstore nodes By creating multiple nodes inside a transaction resulting in an error, a malicious guest can create orphaned nodes in the Xenstore database, as the cleanup after the error will not remove all nodes already created. When the transaction is committed after this situation, nodes without a valid parent can be made permanent in the database. |
6.5 | 2022-10-11 | CVE-2022-33746 | P2M pool freeing may take excessively long The P2M pool backing second level address translation for guests may be of significant size. Therefore its freeing may take more time than is reasonable without intermediate preemption checks. Such checking for the need to preempt was so far missing. |
7 | 2022-04-05 | CVE-2022-26357 | race in VT-d domain ID cleanup Xen domain IDs are up to 15 bits wide. VT-d hardware may allow for only less than 15 bits to hold a domain ID associating a physical device with a particular domain. Therefore internally Xen domain IDs are mapped to the smaller value range. The cleaning up of the housekeeping structures has a race, allowing for VT-d domain IDs to be leaked and flushes to be bypassed. |
5.6 | 2022-04-05 | CVE-2022-26356 | Racy interactions between dirty vram tracking and paging log dirty hypercalls Activation of log dirty mode done by XEN_DMOP_track_dirty_vram (was named HVMOP_track_dirty_vram before Xen 4.9) is racy with ongoing log dirty hypercalls. A suitably timed call to XEN_DMOP_track_dirty_vram can enable log dirty while another CPU is still in the process of tearing down the structures related to a previously enabled log dirty mode (XEN_DOMCTL_SHADOW_OP_OFF). This is due to lack of mutually exclusive locking between both operations and can lead to entries being added in already freed slots, resulting in a memory leak. |
5.5 | 2022-01-25 | CVE-2022-23034 | A PV guest could DoS Xen while unmapping a grant To address XSA-380, reference counting was introduced for grant mappings for the case where a PV guest would have the IOMMU enabled. PV guests can request two forms of mappings. When both are in use for any individual mapping, unmapping of such a mapping can be requested in two steps. The reference count for such a mapping would then mistakenly be decremented twice. Underflow of the counters gets detected, resulting in the triggering of a hypervisor bug check. |
7 | 2021-12-07 | CVE-2021-28703 | grant table v2 status pages may remain accessible after de-allocation (take two) Guests are permitted access to certain Xen-owned pages of memory. The majority of such pages remain allocated / associated with a guest for its entire lifetime. Grant table v2 status pages, however, get de-allocated when a guest switches (back) from v2 to v1. The freeing of such pages requires that the hypervisor know where in the guest these pages were mapped. The hypervisor tracks only one use within guest space, but racing requests from the guest to insert mappings of these pages may result in any of them becoming mapped in multiple locations. Upon switching back from v2 to v1, the guest would then retain access to a page that was freed and perhaps re-used for other purposes. This bug was fortuitously fixed by code cleanup in Xen 4.14, and backported to security-supported Xen branches as a prerequisite of the fix for XSA-378. |
7.8 | 2021-11-24 | CVE-2021-28709 | issues with partially successful P2M updates on x86 [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] x86 HVM and PVH guests may be started in populate-on-demand (PoD) mode, to provide a way for them to later easily have more memory assigned. Guests are permitted to control certain P2M aspects of individual pages via hypercalls. These hypercalls may act on ranges of pages specified via page orders (resulting in a power-of-2 number of pages). In some cases the hypervisor carries out the requests by splitting them into smaller chunks. Error handling in certain PoD cases has been insufficient, in that partial success of some operations was not properly accounted for. There are two code paths affected - page removal (CVE-2021-28705) and insertion of new pages (CVE-2021-28709). (We provide one patch which combines the fix for both issues.) |
8.8 | 2021-11-24 | CVE-2021-28708 | PoD operations on misaligned GFNs [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] x86 HVM and PVH guests may be started in populate-on-demand (PoD) mode, to provide a way for them to later easily have more memory assigned. Guests are permitted to control certain P2M aspects of individual pages via hypercalls. These hypercalls may act on ranges of pages specified via page orders (resulting in a power-of-2 number of pages). The implementation of some of these hypercalls for PoD does not enforce that the base page frame number be suitably aligned for the specified order, yet some code involved in PoD handling actually makes such an assumption. These operations are XENMEM_decrease_reservation (CVE-2021-28704) and XENMEM_populate_physmap (CVE-2021-28707), the latter usable only by domains controlling the guest, i.e. a de-privileged qemu or a stub domain. (Patch 1 combines the fix for both of these issues.) In addition, handling of XENMEM_decrease_reservation can also trigger a host crash when the specified page order is neither 4k nor 2M nor 1G (CVE-2021-28708, patch 2). |
8.8 | 2021-11-24 | CVE-2021-28707 | PoD operations on misaligned GFNs [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] x86 HVM and PVH guests may be started in populate-on-demand (PoD) mode, to provide a way for them to later easily have more memory assigned. Guests are permitted to control certain P2M aspects of individual pages via hypercalls. These hypercalls may act on ranges of pages specified via page orders (resulting in a power-of-2 number of pages). The implementation of some of these hypercalls for PoD does not enforce that the base page frame number be suitably aligned for the specified order, yet some code involved in PoD handling actually makes such an assumption. These operations are XENMEM_decrease_reservation (CVE-2021-28704) and XENMEM_populate_physmap (CVE-2021-28707), the latter usable only by domains controlling the guest, i.e. a de-privileged qemu or a stub domain. (Patch 1 combines the fix for both of these issues.) In addition, handling of XENMEM_decrease_reservation can also trigger a host crash when the specified page order is neither 4k nor 2M nor 1G (CVE-2021-28708, patch 2). |
8.6 | 2021-11-24 | CVE-2021-28706 | guests may exceed their designated memory limit When a guest is permitted to have close to 16TiB of memory, it may be able to issue hypercalls to increase its memory allocation beyond the administrator established limit. This is a result of a calculation done with 32-bit precision, which may overflow. It would then only be the overflowed (and hence small) number which gets compared against the established upper bound. |
7.8 | 2021-11-24 | CVE-2021-28705 | issues with partially successful P2M updates on x86 [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] x86 HVM and PVH guests may be started in populate-on-demand (PoD) mode, to provide a way for them to later easily have more memory assigned. Guests are permitted to control certain P2M aspects of individual pages via hypercalls. These hypercalls may act on ranges of pages specified via page orders (resulting in a power-of-2 number of pages). In some cases the hypervisor carries out the requests by splitting them into smaller chunks. Error handling in certain PoD cases has been insufficient, in that partial success of some operations was not properly accounted for. There are two code paths affected - page removal (CVE-2021-28705) and insertion of new pages (CVE-2021-28709). (We provide one patch which combines the fix for both issues.) |
8.8 | 2021-11-24 | CVE-2021-28704 | PoD operations on misaligned GFNs [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] x86 HVM and PVH guests may be started in populate-on-demand (PoD) mode, to provide a way for them to later easily have more memory assigned. Guests are permitted to control certain P2M aspects of individual pages via hypercalls. These hypercalls may act on ranges of pages specified via page orders (resulting in a power-of-2 number of pages). The implementation of some of these hypercalls for PoD does not enforce that the base page frame number be suitably aligned for the specified order, yet some code involved in PoD handling actually makes such an assumption. These operations are XENMEM_decrease_reservation (CVE-2021-28704) and XENMEM_populate_physmap (CVE-2021-28707), the latter usable only by domains controlling the guest, i.e. a de-privileged qemu or a stub domain. (Patch 1 combines the fix for both of these issues.) In addition, handling of XENMEM_decrease_reservation can also trigger a host crash when the specified page order is neither 4k nor 2M nor 1G (CVE-2021-28708, patch 2). |
7.6 | 2021-10-06 | CVE-2021-28702 | PCI devices with RMRRs not deassigned correctly Certain PCI devices in a system might be assigned Reserved Memory Regions (specified via Reserved Memory Region Reporting, "RMRR"). These are typically used for platform tasks such as legacy USB emulation. If such a device is passed through to a guest, then on guest shutdown the device is not properly deassigned. The IOMMU configuration for these devices which are not properly deassigned ends up pointing to a freed data structure, including the IO Pagetables. Subsequent DMA or interrupts from the device will have unpredictable behaviour, ranging from IOMMU faults to memory corruption. |
7.8 | 2021-08-27 | CVE-2021-28697 | grant table v2 status pages may remain accessible after de-allocation Guests are permitted access to certain Xen-owned pages of memory. The majority of such pages remain allocated / associated with a guest for its entire lifetime. Grant table v2 status pages, however, get de-allocated when a guest switches (back) from v2 to v1. The freeing of such pages requires that the hypervisor know where in the guest these pages were mapped. The hypervisor tracks only one use within guest space, but racing requests from the guest to insert mappings of these pages may result in any of them becoming mapped in multiple locations. Upon switching back from v2 to v1, the guest would then retain access to a page that was freed and perhaps re-used for other purposes. |
5.5 | 2021-06-30 | CVE-2021-28693 | xen/arm: Boot modules are not scrubbed The bootloader will load boot modules (e.g. kernel, initramfs...) in a temporary area before they are copied by Xen to each domain memory. To ensure sensitive data is not leaked from the modules, Xen must "scrub" them before handing the page over to the allocator. Unfortunately, it was discovered that modules will not be scrubbed on Arm. |
6.5 | 2021-06-29 | CVE-2021-28690 | x86: TSX Async Abort protections not restored after S3 This issue relates to the TSX Async Abort speculative security vulnerability. Please see https://xenbits.xen.org/xsa/advisory-305.html for details. Mitigating TAA by disabling TSX (the default and preferred option) requires selecting a non-default setting in MSR_TSX_CTRL. This setting isn't restored after S3 suspend. |
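The overflow in the Arm cache-maintenance helpers (CVE-2023-34321 / CVE-2023-46837 above) follows a classic pattern: if a helper computes `end = start + size` in fixed-width arithmetic, a region near the top of the address space wraps `end` around to a value below `start`, and the clean/invalidate loop never executes. The following Python sketch models this in 32-bit arithmetic; the function name, cache-line size, and loop shape are illustrative assumptions, not Xen's actual Arm code:

```python
MASK32 = 0xFFFFFFFF  # model 32-bit unsigned arithmetic

def cachelines_cleaned(start: int, size: int, line: int = 64) -> int:
    """Count how many cache lines a naive clean/invalidate loop touches."""
    end = (start + size) & MASK32   # this addition can wrap past 2^32
    cleaned = 0
    addr = start & ~(line - 1)      # align down to a cache-line boundary
    while addr < end:               # if end wrapped below start, the loop is skipped
        cleaned += 1
        addr += line
    return cleaned

# normal case: the whole 0x100-byte region (4 lines of 64 bytes) is visited
assert cachelines_cleaned(0x1000, 0x100) == 4
# overflow case: start + size wraps past 2^32, so no lines are cleaned at all
assert cachelines_cleaned(0xFFFF_F000, 0x2000) == 0
```

The fix direction implied by the advisory is to make the helpers robust against this wrap, e.g. by iterating on a remaining-length counter instead of comparing against a computed end address.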
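The limit bypass in CVE-2021-28706 is a related truncation bug: near 16 TiB (2^32 pages of 4 KiB), a page count computed with 32-bit precision overflows, and only the small wrapped value is compared against the administrator-established bound. A minimal sketch, assuming a hypothetical check function (not Xen's actual hypercall code):

```python
MASK32 = 0xFFFFFFFF
PAGE_SHIFT = 12  # 4 KiB pages; 2^32 pages * 4 KiB = 16 TiB
assert (1 << 32) * (1 << PAGE_SHIFT) == 16 * 2**40

def limit_check_32bit(current_pages: int, extra_pages: int, max_pages: int) -> bool:
    """Buggy check: the sum is truncated to 32 bits before the comparison."""
    total = (current_pages + extra_pages) & MASK32  # wraps for huge guests
    return total <= max_pages

current = (1 << 32) - 1024   # guest already holds just under 2^32 pages
limit   = (1 << 32) - 512    # administrator-established upper bound

# correct arithmetic would reject this allocation request...
assert current + 4096 > limit
# ...but the truncated comparison accepts it: the sum wraps to a tiny number
assert limit_check_32bit(current, 4096, limit) is True
```

With full-width arithmetic the same check (`current + extra <= limit`) rejects the request, which is the essence of the fix described in the advisory.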
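The double decrement behind CVE-2022-23034 can likewise be illustrated with a toy reference-count model: one reference is taken for a mapping requested in both forms, but each of the two unmap steps decrements, and the resulting underflow trips a hypervisor bug check. The class and method below are hypothetical stand-ins, not Xen's grant-table code:

```python
class GrantMapping:
    """Toy model of the XSA-380-style refcounting described for CVE-2022-23034."""

    def __init__(self) -> None:
        self.refs = 1  # a single reference covers the combined dual-form mapping

    def unmap_step(self) -> None:
        # Buggy behavior: each of the two unmap steps decrements, even though
        # only one reference was taken for the dual mapping.
        self.refs -= 1
        if self.refs < 0:
            # stands in for the hypervisor's BUG() check on counter underflow
            raise AssertionError("refcount underflow -> hypervisor bug check")

m = GrantMapping()
m.unmap_step()            # first step: refs 1 -> 0, fine
underflowed = False
try:
    m.unmap_step()        # second step: refs 0 -> -1, underflow detected
except AssertionError:
    underflowed = True
assert underflowed        # a guest-triggerable path to a host bug check (DoS)
```

The fix described in the advisory amounts to decrementing only once per reference actually taken for such a dual-form mapping.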
CWE : Common Weakness Enumeration
% | id | Name |
---|---|---|
12% (14) | CWE-362 | Race Condition |
8% (10) | CWE-119 | Failure to Constrain Operations within the Bounds of a Memory Buffer |
7% (8) | CWE-770 | Allocation of Resources Without Limits or Throttling |
7% (8) | CWE-20 | Improper Input Validation |
6% (7) | CWE-400 | Uncontrolled Resource Consumption ('Resource Exhaustion') |
5% (6) | CWE-755 | Improper Handling of Exceptional Conditions |
5% (6) | CWE-476 | NULL Pointer Dereference |
4% (5) | CWE-200 | Information Exposure |
3% (4) | CWE-787 | Out-of-bounds Write |
3% (4) | CWE-416 | Use After Free |
3% (4) | CWE-269 | Improper Privilege Management |
2% (3) | CWE-401 | Failure to Release Memory Before Removing Last Reference ('Memory Leak') |
1% (2) | CWE-754 | Improper Check for Unusual or Exceptional Conditions |
1% (2) | CWE-670 | Always-Incorrect Control Flow Implementation |
1% (2) | CWE-667 | Insufficient Locking |
1% (2) | CWE-662 | Insufficient Synchronization |
1% (2) | CWE-459 | Incomplete Cleanup |
1% (2) | CWE-212 | Improper Cross-boundary Removal of Sensitive Data |
1% (2) | CWE-193 | Off-by-one Error |
1% (2) | CWE-125 | Out-of-bounds Read |
0% (1) | CWE-772 | Missing Release of Resource after Effective Lifetime |
0% (1) | CWE-732 | Incorrect Permission Assignment for Critical Resource |
0% (1) | CWE-682 | Incorrect Calculation |
0% (1) | CWE-674 | Uncontrolled Recursion |
0% (1) | CWE-668 | Exposure of Resource to Wrong Sphere |
Snort® IPS/IDS
Date | Description |
---|---|
2019-09-24 | OMRON CX-One MCI file stack buffer overflow attempt RuleID : 51192 - Type : FILE-OTHER - Revision : 1 |
2019-09-24 | OMRON CX-One MCI file stack buffer overflow attempt RuleID : 51191 - Type : FILE-OTHER - Revision : 1 |
Nessus® Vulnerability Scanner
id | Description |
---|---|
2019-01-15 | Name: The remote Debian host is missing a security-related update. File: debian_DSA-4369.nasl - Type: ACT_GATHER_INFO |
2019-01-03 | Name: The remote Fedora host is missing a security update. File: fedora_2018-683dfde81a.nasl - Type: ACT_GATHER_INFO |
2019-01-03 | Name: The remote Fedora host is missing a security update. File: fedora_2018-73dd8de892.nasl - Type: ACT_GATHER_INFO |
2019-01-03 | Name: The remote Fedora host is missing one or more security updates. File: fedora_2018-8422d94975.nasl - Type: ACT_GATHER_INFO |
2019-01-03 | Name: The remote Fedora host is missing a security update. File: fedora_2018-a24754252a.nasl - Type: ACT_GATHER_INFO |
2019-01-03 | Name: The remote Fedora host is missing a security update. File: fedora_2018-a7862a75f5.nasl - Type: ACT_GATHER_INFO |
2019-01-03 | Name: The remote Fedora host is missing a security update. File: fedora_2018-a7ac26523d.nasl - Type: ACT_GATHER_INFO |
2019-01-03 | Name: The remote Fedora host is missing one or more security updates. File: fedora_2018-cc812838fb.nasl - Type: ACT_GATHER_INFO |
2019-01-03 | Name: The remote Fedora host is missing a security update. File: fedora_2018-dbebca30d0.nasl - Type: ACT_GATHER_INFO |
2018-11-26 | Name: A server virtualization platform installed on the remote host is missing a se... File: citrix_xenserver_CTX239432.nasl - Type: ACT_GATHER_INFO |
2018-11-13 | Name: The remote Debian host is missing a security update. File: debian_DLA-1577.nasl - Type: ACT_GATHER_INFO |
2018-11-13 | Name: The remote Fedora host is missing a security update. File: fedora_2018-f20a0cead5.nasl - Type: ACT_GATHER_INFO |
2018-11-09 | Name: A server virtualization platform installed on the remote host is missing a se... File: citrix_xenserver_CTX239100.nasl - Type: ACT_GATHER_INFO |
2018-10-31 | Name: The remote Debian host is missing a security update. File: debian_DLA-1559.nasl - Type: ACT_GATHER_INFO |
2018-10-31 | Name: The remote Gentoo host is missing one or more security-related patches. File: gentoo_GLSA-201810-06.nasl - Type: ACT_GATHER_INFO |
2018-10-19 | Name: The remote Debian host is missing a security update. File: debian_DLA-1549.nasl - Type: ACT_GATHER_INFO |
2018-10-10 | Name: The remote Debian host is missing a security-related update. File: debian_DSA-4313.nasl - Type: ACT_GATHER_INFO |
2018-10-04 | Name: The remote Debian host is missing a security update. File: debian_DLA-1531.nasl - Type: ACT_GATHER_INFO |
2018-10-02 | Name: The remote Debian host is missing a security-related update. File: debian_DSA-4308.nasl - Type: ACT_GATHER_INFO |
2018-09-04 | Name: The remote Fedora host is missing a security update. File: fedora_2018-915602df63.nasl - Type: ACT_GATHER_INFO |
2018-08-24 | Name: The remote Fedora host is missing one or more security updates. File: fedora_2018-79d7c3d2df.nasl - Type: ACT_GATHER_INFO |
2018-08-06 | Name: The remote Fedora host is missing one or more security updates. File: fedora_2018-49bda79bd5.nasl - Type: ACT_GATHER_INFO |
2018-07-27 | Name: A server virtualization platform installed on the remote host is affected by ... File: citrix_xenserver_CTX235748.nasl - Type: ACT_GATHER_INFO |
2018-07-24 | Name: The remote Fedora host is missing a security update. File: fedora_2018-1a467757ce.nasl - Type: ACT_GATHER_INFO |
2018-06-29 | Name: The remote Debian host is missing a security-related update. File: debian_DSA-4236.nasl - Type: ACT_GATHER_INFO |