Total: 443 CVE entries
CVE | Vendors | Products | Updated | CVSS v2 | CVSS v3 |
---|---|---|---|---|---|
CVE-2021-28709 | 3 Debian, Fedoraproject, Xen | 3 Debian Linux, Fedora, Xen | 2021-12-09 | 6.9 MEDIUM | 7.8 HIGH |
issues with partially successful P2M updates on x86. [This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] x86 HVM and PVH guests may be started in populate-on-demand (PoD) mode, to provide a way for them to later easily have more memory assigned. Guests are permitted to control certain P2M aspects of individual pages via hypercalls. These hypercalls may act on ranges of pages specified via page orders (resulting in a power-of-2 number of pages). In some cases the hypervisor carries out the requests by splitting them into smaller chunks. Error handling in certain PoD cases was insufficient: in particular, partial success of some operations was not properly accounted for. There are two code paths affected - page removal (CVE-2021-28705) and insertion of new pages (CVE-2021-28709). (We provide one patch which combines the fix to both issues.) (A hedged C sketch of this chunked-update pattern appears after the table.) | |||||
CVE-2020-29570 | 3 Debian, Fedoraproject, Xen | 3 Debian Linux, Fedora, Xen | 2021-12-09 | 4.9 MEDIUM | 6.2 MEDIUM |
An issue was discovered in Xen through 4.14.x. The recording of the per-vCPU control block mapping maintained by Xen, and that of pointers into the control block, is reversed. The consumer, seeing the former initialized, assumes that the latter are also ready for use. Malicious or buggy guest kernels can mount a Denial of Service (DoS) attack affecting the entire system. | |||||
CVE-2020-29566 | 3 Debian, Fedoraproject, Xen | 3 Debian Linux, Fedora, Xen | 2021-12-09 | 4.9 MEDIUM | 5.5 MEDIUM |
An issue was discovered in Xen through 4.14.x. When they require assistance from the device model, x86 HVM guests must be temporarily de-scheduled. The device model will signal Xen when it has completed its operation, via an event channel, so that the relevant vCPU is rescheduled. If the device model were to signal Xen without having actually completed the operation, the de-schedule / re-schedule cycle would repeat. If, in addition, Xen is re-signalled very quickly, the re-schedule may occur before the de-schedule was fully complete, triggering a shortcut. This potentially repeating process uses ordinary recursive function calls, and thus could result in a stack overflow. A malicious or buggy stubdomain serving an HVM guest can cause Xen to crash, resulting in a Denial of Service (DoS) to the entire host. Only x86 systems are affected. Arm systems are not affected. Only x86 stubdomains serving HVM guests can exploit the vulnerability. (A hedged C sketch contrasting the recursive and iterative handler shapes appears after the table.) | |||||
CVE-2020-29571 | 3 Debian, Fedoraproject, Xen | 3 Debian Linux, Fedora, Xen | 2021-12-09 | 4.9 MEDIUM | 6.2 MEDIUM |
An issue was discovered in Xen through 4.14.x. A bounds check common to most operation-time functions specific to FIFO event channels depends on the CPU observing consistent state. While the producer side uses appropriately ordered writes, the consumer side isn't protected against re-ordered reads, and may hence end up de-referencing a NULL pointer. Malicious or buggy guest kernels can mount a Denial of Service (DoS) attack affecting the entire system. Only Arm systems may be vulnerable; whether a system is vulnerable depends on the specific CPU. x86 systems are not vulnerable. (A hedged C sketch of the missing acquire ordering appears after the table.) | |||||
CVE-2020-29567 | 2 Fedoraproject, Xen | 2 Fedora, Xen | 2021-12-09 | 4.9 MEDIUM | 6.2 MEDIUM |
An issue was discovered in Xen 4.14.x. When moving IRQs between CPUs to distribute the load of IRQ handling, IRQ vectors are dynamically allocated and de-allocated on the relevant CPUs. De-allocation has to happen when certain constraints are met. If these conditions are not met when first checked, the checking CPU may send an interrupt to itself, in the expectation that this IRQ will be delivered only after the condition preventing the cleanup has cleared. For two specific IRQ vectors, this expectation was violated, resulting in a continuous stream of self-interrupts, which renders the CPU effectively unusable. A domain with a passed through PCI device can cause lockup of a physical CPU, resulting in a Denial of Service (DoS) to the entire host. Only x86 systems are vulnerable. Arm systems are not vulnerable. Only guests with physical PCI devices passed through to them can exploit the vulnerability. | |||||
CVE-2020-29486 | 3 Debian, Fedoraproject, Xen | 3 Debian Linux, Fedora, Xen | 2021-12-09 | 4.9 MEDIUM | 6.0 MEDIUM |
An issue was discovered in Xen through 4.14.x. Nodes in xenstore have an owner. In oxenstored, an owner could give a node away. However, node ownership has quota implications. Any guest can run another guest out of quota, or create an unbounded number of nodes owned by dom0, thus running xenstored out of memory. A malicious guest administrator can cause a denial of service against a specific guest or against the whole host. All systems using oxenstored are vulnerable. Building and using oxenstored is the default in the upstream Xen distribution, if the Ocaml compiler is available. Systems using C xenstored are not vulnerable. | |||||
CVE-2021-28703 | 1 Xen | 1 Xen | 2021-12-09 | 6.9 MEDIUM | 7.0 HIGH |
grant table v2 status pages may remain accessible after de-allocation (take two) Guests are permitted access to certain Xen-owned pages of memory. The majority of such pages remain allocated / associated with a guest for its entire lifetime. Grant table v2 status pages, however, get de-allocated when a guest switches (back) from v2 to v1. The freeing of such pages requires that the hypervisor know where in the guest these pages were mapped. The hypervisor tracks only one use within guest space, but racing requests from the guest to insert mappings of these pages may result in any of them becoming mapped in multiple locations. Upon switching back from v2 to v1, the guest would then retain access to a page that was freed and perhaps re-used for other purposes. This bug was fortuitously fixed by code cleanup in Xen 4.14, and backported to security-supported Xen branches as a prerequisite of the fix for XSA-378. | |||||
CVE-2015-6815 | 7 Arista, Canonical, Fedoraproject and 4 more | 11 Eos, Ubuntu Linux, Fedora and 8 more | 2021-11-30 | 2.7 LOW | 3.5 LOW |
The process_tx_desc function in hw/net/e1000.c in QEMU before 2.4.0.1 does not properly process transmit descriptor data when sending a network packet, which allows attackers to cause a denial of service (infinite loop and guest crash) via unspecified vectors. (A hedged C sketch of a bounded descriptor-processing loop appears after the table.) | |||||
CVE-2015-3456 | 3 Qemu, Redhat, Xen | 5 Qemu, Enterprise Linux, Enterprise Virtualization and 2 more | 2021-11-17 | 7.7 HIGH | N/A |
The Floppy Disk Controller (FDC) in QEMU, as used in Xen 4.5.x and earlier and KVM, allows local guest users to cause a denial of service (out-of-bounds write and guest crash) or possibly execute arbitrary code via the (1) FD_CMD_READ_ID, (2) FD_CMD_DRIVE_SPECIFICATION_COMMAND, or other unspecified commands, aka VENOM. (A hedged C sketch of this unbounded-index bug class appears after the table.) | |||||
CVE-2021-28693 | 1 Xen | 1 Xen | 2021-09-21 | 2.1 LOW | 5.5 MEDIUM |
xen/arm: Boot modules are not scrubbed The bootloader will load boot modules (e.g. kernel, initramfs...) in a temporary area before they are copied by Xen to each domain's memory. To ensure sensitive data is not leaked from the modules, Xen must "scrub" them before handing the page over to the allocator. Unfortunately, it was discovered that modules will not be scrubbed on Arm. | |||||
CVE-2021-28690 | 1 Xen | 1 Xen | 2021-09-21 | 4.0 MEDIUM | 6.5 MEDIUM |
x86: TSX Async Abort protections not restored after S3 This issue relates to the TSX Async Abort speculative security vulnerability. Please see https://xenbits.xen.org/xsa/advisory-305.html for details. Mitigating TAA by disabling TSX (the default and preferred option) requires selecting a non-default setting in MSR_TSX_CTRL. This setting isn't restored after S3 suspend. | |||||
CVE-2021-28687 | 1 Xen | 1 Xen | 2021-09-20 | 4.9 MEDIUM | 5.5 MEDIUM |
HVM soft-reset crashes toolstack libxl requires all data structures passed across its public interface to be initialized before use and disposed of afterwards by calling a specific set of functions. Many internal data structures also require this initialize / dispose discipline, but not all of them. When the "soft reset" feature was implemented, the libxl__domain_suspend_state structure didn't require any initialization or disposal. At some point later, an initialization function was introduced for the structure; but the "soft reset" path wasn't refactored to call the initialization function. When a guest now initiates a "soft reboot", the uninitialized data structure leads to an assert() when later code finds the structure in an unexpected state. The effect of this is to crash the process monitoring the guest. How this affects the system depends on the structure of the toolstack. For xl, this will have no security-relevant effect: every VM has its own independent monitoring process, which contains no state. The domain in question will hang in a crashed state, but can be destroyed by `xl destroy` just like any other non-cooperating domain. For daemon-based toolstacks linked against libxl, such as libvirt, this will crash the toolstack, losing the state of any in-progress operations (localized DoS), and preventing further administrator operations unless the daemon is configured to restart automatically (system-wide DoS). If crashes "leak" resources, then repeated crashes could use up resources, also causing a system-wide DoS. (A hedged C sketch of the init/dispose discipline appears after the table.) | |||||
CVE-2017-2620 | 5 Citrix, Debian, Qemu and 2 more | 10 Xenserver, Debian Linux, Qemu and 7 more | 2021-08-04 | 9.0 HIGH | 9.9 CRITICAL |
Quick emulator (QEMU) before 2.8 built with the Cirrus CLGD 54xx VGA Emulator support is vulnerable to an out-of-bounds access issue. The issue could occur while copying VGA data in cirrus_bitblt_cputovideo. A privileged user inside the guest could use this flaw to crash the QEMU process or potentially execute arbitrary code on the host with the privileges of the QEMU process. | |||||
CVE-2020-29480 | 3 Debian, Fedoraproject, Xen | 3 Debian Linux, Fedora, Xen | 2021-07-21 | 2.1 LOW | 2.3 LOW |
An issue was discovered in Xen through 4.14.x. Neither xenstore implementation does any permission checks when reporting a xenstore watch event. A guest administrator can watch the root xenstored node, which will cause notifications for every created, modified, and deleted key. A guest administrator can also use the special watches, which will cause a notification every time a domain is created and destroyed. Data may include: number, type, and domids of other VMs; existence and domids of driver domains; numbers of virtual interfaces, block devices, vcpus; existence of virtual framebuffers and their backend style (e.g., existence of VNC service); Xen VM UUIDs for other domains; timing information about domain creation and device setup; and some hints at the backend provisioning of VMs and their devices. The watch events do not contain values stored in xenstore, only key names. A guest administrator can observe non-sensitive domain and device lifecycle events relating to other guests. This information allows some insight into overall system configuration (including the number and general nature of other guests), and configuration of other guests (including the number and general nature of other guests' devices). This information might be commercially interesting or might make other attacks easier. There is not believed to be exposure of sensitive data. Specifically, there is no exposure of VNC passwords, port numbers, pathnames in host and guest filesystems, cryptographic keys, or within-guest data. | |||||
CVE-2020-29481 | 3 Debian, Fedoraproject, Xen | 3 Debian Linux, Fedora, Xen | 2021-07-21 | 4.6 MEDIUM | 8.8 HIGH |
An issue was discovered in Xen through 4.14.x. Access rights of Xenstore nodes are per domid. Unfortunately, existing granted access rights are not removed when a domain is being destroyed. This means that a new domain created with the same domid will inherit the access rights to Xenstore nodes from the previous domain(s) with the same domid. Because all Xenstore entries of a guest below /local/domain/<domid> are being deleted by Xen tools when a guest is destroyed, only Xenstore entries of other guests still running are affected. For example, a newly created guest domain might be able to read sensitive information that had belonged to a previously existing guest domain. Both Xenstore implementations (C and Ocaml) are vulnerable. (A hedged C sketch of sweeping per-domid access rights on domain destruction appears after the table.) | |||||
CVE-2021-28692 | 1 Xen | 1 Xen | 2021-07-12 | 5.6 MEDIUM | 7.1 HIGH |
inappropriate x86 IOMMU timeout detection / handling IOMMUs process commands issued to them in parallel with the operation of the CPU(s) issuing such commands. In the current implementation in Xen, asynchronous notification of the completion of such commands is not used. Instead, the issuing CPU spin-waits for the completion of the most recently issued command(s). Some of these waiting loops try to apply a timeout to fail overly-slow commands. The course of action when a timeout is actually detected is inappropriate: on Intel hardware, guests which did not originally cause the timeout may be marked as crashed; on AMD hardware, higher-layer callers are not notified of the issue and continue as if the IOMMU operation had succeeded. (A hedged C sketch of a timeout-propagating spin-wait appears after the table.) | |||||
CVE-2021-28689 | 1 Xen | 1 Xen | 2021-06-24 | 2.1 LOW | 5.5 MEDIUM |
x86: Speculative vulnerabilities with bare (non-shim) 32-bit PV guests 32-bit x86 PV guest kernels run in ring 1. At the time when Xen was developed, this area of the i386 architecture was rarely used, which is why Xen was able to use it to implement paravirtualisation, Xen's novel approach to virtualization. In AMD64, Xen had to use a different implementation approach, so Xen does not use ring 1 to support 64-bit guests. With the focus now being on 64-bit systems, and the availability of explicit hardware support for virtualization, fixing speculation issues in ring 1 is not a priority for processor companies. Indirect Branch Restricted Speculation (IBRS) is an architectural x86 extension put together to combat speculative execution side-channel attacks, including Spectre v2. It was retrofitted in microcode to existing CPUs. For more details on Spectre v2, see: http://xenbits.xen.org/xsa/advisory-254.html However, IBRS does not architecturally protect ring 0 from predictions learnt in ring 1. For more details, see: https://software.intel.com/security-software-guidance/deep-dives/deep-dive-indirect-branch-restricted-speculation Similar situations may exist with other mitigations for other kinds of speculative execution attacks. The situation is quite likely to be similar for speculative execution attacks which have yet to be discovered, disclosed, or mitigated. | |||||
CVE-2020-29482 | 3 Debian, Fedoraproject, Xen | 3 Debian Linux, Fedora, Xen | 2021-03-16 | 4.9 MEDIUM | 6.0 MEDIUM |
An issue was discovered in Xen through 4.14.x. A guest may access xenstore paths via absolute paths containing a full pathname, or via a relative path, which implicitly includes /local/domain/$DOMID for their own domain id. Management tools must access paths in guests' namespaces, necessarily using absolute paths. oxenstored imposes a pathname limit that is applied solely to the relative or absolute path specified by the client. Therefore, a guest can create paths in its own namespace which are too long for management tools to access. Depending on the toolstack in use, a malicious guest administrator might cause some management tools and debugging operations to fail. For example, a guest administrator can cause "xenstore-ls -r" to fail. However, a guest administrator cannot prevent the host administrator from tearing down the domain. All systems using oxenstored are vulnerable. Building and using oxenstored is the default in the upstream Xen distribution, if the Ocaml compiler is available. Systems using C xenstored are not vulnerable. (A hedged C sketch of the relative-versus-canonical length check appears after the table.) | |||||
CVE-2020-29483 | 3 Debian, Fedoraproject, Xen | 3 Debian Linux, Fedora, Xen | 2021-03-16 | 4.9 MEDIUM | 6.5 MEDIUM |
An issue was discovered in Xen through 4.14.x. Xenstored and guests communicate via a shared memory page using a specific protocol. When a guest violates this protocol, xenstored will drop the connection to that guest. Unfortunately, this is done by just removing the guest from xenstored's internal management, resulting in the same actions as if the guest had been destroyed, including sending an @releaseDomain event. @releaseDomain events do not say that the guest has been removed. All watchers of this event must look at the states of all guests to find the guest that has been removed. When an @releaseDomain is generated due to a domain xenstored protocol violation, because the guest is still running, the watchers will not react. Later, when the guest is actually destroyed, xenstored will no longer have it stored in its internal database, so no further @releaseDomain event will be sent. This can lead to a zombie domain; memory mappings of that guest's memory will not be removed, due to the missing event. This zombie domain will be cleaned up only after another domain is destroyed, as that will trigger another @releaseDomain event. If the device model of the guest that violated the Xenstore protocol is running in a stub-domain, a use-after-free case could happen in xenstored, after having removed the guest from its internal database, possibly resulting in a crash of xenstored. A malicious guest can block resources of the host for a period after its own death. Guests with a stub domain device model can eventually crash xenstored, resulting in a more serious denial of service (the prevention of any further domain management operations). Only the C variant of Xenstore is affected; the Ocaml variant is not affected. Only HVM guests with a stubdom device model can cause a serious DoS. | |||||
CVE-2020-29484 | 3 Debian, Fedoraproject, Xen | 3 Debian Linux, Fedora, Xen | 2021-03-16 | 4.9 MEDIUM | 6.0 MEDIUM |
An issue was discovered in Xen through 4.14.x. When a Xenstore watch fires, the xenstore client that registered the watch will receive a Xenstore message containing the path of the modified Xenstore entry that triggered the watch, and the tag that was specified when registering the watch. Any communication with xenstored is done via Xenstore messages, consisting of a message header and the payload. The payload length is limited to 4096 bytes. Any request to xenstored resulting in a response with a payload longer than 4096 bytes will result in an error. When registering a watch, the payload length limit applies to the combined length of the watched path and the specified tag. Because watches for a specific path are also triggered for all nodes below that path, the payload of a watch event message can be longer than the payload needed to register the watch. A malicious guest that registers a watch using a very large tag (i.e., with a registration operation payload length close to the 4096 byte limit) can cause the generation of watch events with a payload length larger than 4096 bytes, by writing to Xenstore entries below the watched path. This will result in an error condition in xenstored. This error can result in a NULL pointer dereference, leading to a crash of xenstored. A malicious guest administrator can cause xenstored to crash, leading to a denial of service. Following a xenstored crash, domains may continue to run, but management operations will be impossible. Only C xenstored is affected; oxenstored is not affected. (A hedged C sketch of the watch-event payload arithmetic appears after the table.) | |||||
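The chunk-splitting issue in CVE-2021-28705/28709 (first entry above) comes down to a generic pattern: a range operation is carried out in smaller chunks, and an error in a later chunk must not throw away the accounting for chunks that already succeeded. A minimal sketch of that pattern, assuming invented helpers remove_chunk() and remove_range(); it is not Xen's P2M code.

```c
/* Hedged sketch of chunked range processing with partial-success
 * accounting (the pattern behind CVE-2021-28705/28709); all names are
 * hypothetical, not Xen's real P2M code. */
#include <stdio.h>

#define CHUNK 8

/* Pretend to remove CHUNK pages starting at 'start'; fail once the
 * chunk would go past page 20, to simulate a mid-range error. */
static int remove_chunk(unsigned long start)
{
    return (start + CHUNK > 20) ? -1 : 0;
}

/* Remove 'count' pages, reporting how many were actually removed. */
static int remove_range(unsigned long start, unsigned long count,
                        unsigned long *done)
{
    *done = 0;
    while (count) {
        unsigned long n = count < CHUNK ? count : CHUNK;
        if (remove_chunk(start) < 0)
            return -1;          /* caller still sees *done pages gone */
        *done += n;             /* account for the successful chunk */
        start += n;
        count -= n;
    }
    return 0;
}

int main(void)
{
    unsigned long done;
    int rc = remove_range(0, 32, &done);
    /* The buggy pattern would report rc < 0 and behave as if nothing
     * happened, leaving the first 'done' pages unaccounted for. */
    printf("rc=%d, pages actually removed=%lu\n", rc, done);
    return 0;
}
```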
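For CVE-2020-29566, the core hazard is recursion driven by an untrusted peer: if re-signalling re-enters the handler through an ordinary function call, a stubdomain that signals quickly enough grows the stack without bound. A hedged sketch of the recursive versus iterative shape, assuming invented stand-ins (signal_pending(), handle_ioreq_*); it is not Xen's scheduling code.

```c
/* Hedged sketch: recursion vs. iteration when an untrusted peer can
 * re-signal immediately (the pattern behind CVE-2020-29566).  All
 * names are invented stand-ins. */
#include <stdbool.h>
#include <stdio.h>

static int signals = 1000000;           /* hostile peer signals "forever" */

static bool signal_pending(void)
{
    return signals-- > 0;
}

/* Dangerous shape: each pending signal adds a stack frame. */
static void handle_ioreq_recursive(void)
{
    if (signal_pending())
        handle_ioreq_recursive();       /* stack depth == attacker's rate */
}

/* Safe shape: bounded stack no matter how often the peer signals. */
static void handle_ioreq_iterative(void)
{
    while (signal_pending())
        ;                               /* re-check in a loop instead */
}

int main(void)
{
    (void)handle_ioreq_recursive;       /* shown for contrast, not called */
    handle_ioreq_iterative();
    printf("handled all signals with constant stack usage\n");
    return 0;
}
```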
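For CVE-2020-29571, the producer orders its writes but the consumer reads without a matching barrier, so it can observe the "ready" indication while still seeing a stale NULL pointer. A minimal C11-atomics sketch of the pairing, assuming an invented struct queue; it is not Xen's FIFO event channel implementation.

```c
/* Hedged sketch of the consumer-side ordering gap behind
 * CVE-2020-29571, using plain C11 atomics. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct queue {
    int *head;                      /* set up by the producer */
    atomic_bool ready;              /* producer's "initialized" flag */
};

static int slot = 42;

static void producer(struct queue *q)
{
    q->head = &slot;                                  /* 1: write data   */
    atomic_store_explicit(&q->ready, true,
                          memory_order_release);      /* 2: then publish */
}

static int consumer(struct queue *q)
{
    /* memory_order_acquire pairs with the release store above; a
     * relaxed load here would permit reading q->head as NULL even
     * though 'ready' is already observed as true. */
    if (!atomic_load_explicit(&q->ready, memory_order_acquire))
        return -1;
    return *q->head;                /* safe only after the acquire load */
}

int main(void)
{
    struct queue q = { .head = NULL, .ready = false };
    producer(&q);
    printf("consumer read %d\n", consumer(&q));
    return 0;
}
```

The point is that ordering on the producer side alone is not enough: both sides of the handoff need matching ordering for the check to be meaningful.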
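For CVE-2015-6815, the bug class is a transmit-descriptor loop whose termination depends on guest-controlled fields. A hedged sketch of a descriptor walk with a hard cap on accumulated bytes, assuming an invented struct tx_desc and MAX_FRAME limit; it is not QEMU's process_tx_desc().

```c
/* Hedged sketch of the flaw class behind CVE-2015-6815: a transmit
 * descriptor loop whose progress is controlled by the guest.  The cap
 * on accumulated bytes is the kind of guard that keeps the device
 * model from spinning or growing state without bound. */
#include <stdint.h>
#include <stdio.h>

#define MAX_FRAME 16384u            /* hypothetical upper bound */

struct tx_desc {
    uint32_t len;                   /* guest-controlled length field */
};

static int process_tx(const struct tx_desc *ring, unsigned n)
{
    uint32_t total = 0;

    for (unsigned i = 0; i < n; i++) {
        uint32_t len = ring[i].len;
        /* A zero or absurd guest-supplied length must not be able to
         * keep the emulator busy indefinitely. */
        if (len == 0 || total + len > MAX_FRAME)
            return -1;              /* drop the malformed frame */
        total += len;
    }
    return (int)total;
}

int main(void)
{
    struct tx_desc good[2] = { { 64 }, { 1500 } };
    struct tx_desc bad[1]  = { { 0 } };
    printf("good frame: %d bytes\n", process_tx(good, 2));
    printf("bad frame:  %d\n", process_tx(bad, 1));
    return 0;
}
```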
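For CVE-2015-3456 (VENOM), the bug class is a fixed-size command FIFO whose write index the guest can keep advancing. A simplified, hypothetical sketch of the missing bound; it is not QEMU's fdc.c.

```c
/* Hedged sketch of the VENOM (CVE-2015-3456) bug class: a fixed-size
 * command FIFO indexed by a counter the guest can keep advancing. */
#include <stdint.h>
#include <stdio.h>

#define FIFO_SIZE 512u

struct fdc_state {
    uint8_t  fifo[FIFO_SIZE];
    uint32_t pos;                   /* next write position */
};

static void fifo_write(struct fdc_state *s, uint8_t val)
{
    /* Without this check (or an equivalent wrap), a command that never
     * terminates lets the guest write past the end of fifo[]. */
    if (s->pos >= FIFO_SIZE) {
        s->pos = 0;                 /* or: abort the command */
        return;
    }
    s->fifo[s->pos++] = val;
}

int main(void)
{
    struct fdc_state s = { .pos = 0 };
    for (uint32_t i = 0; i < 10000; i++)    /* hostile guest spams bytes */
        fifo_write(&s, 0xAB);
    printf("pos stayed in bounds: %u\n", s.pos);
    return 0;
}
```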
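For CVE-2021-28687, the failure is a code path that skips a structure's required initialization step and trips a later assert(). A hedged sketch of the init/dispose discipline, assuming invented names (suspend_state, dss_init(), dss_dispose()); it is not libxl's real interface.

```c
/* Hedged sketch of the init/dispose discipline behind CVE-2021-28687:
 * a structure gains a required _init() step, but one code path is
 * never updated to call it. */
#include <assert.h>
#include <stdio.h>

struct suspend_state {
    int initialized;                /* validity marker */
    int in_progress;
};

static void dss_init(struct suspend_state *s)
{
    s->initialized = 1;
    s->in_progress = 0;
}

static void dss_dispose(struct suspend_state *s)
{
    s->initialized = 0;
}

static void do_suspend(struct suspend_state *s)
{
    /* Later code asserts the discipline was followed; an uninitialized
     * structure turns a guest-triggered path into a process crash. */
    assert(s->initialized);
    s->in_progress = 1;
}

int main(void)
{
    struct suspend_state s;

    dss_init(&s);                   /* the fix shape: every path initializes */
    do_suspend(&s);
    dss_dispose(&s);
    printf("suspend path completed with initialized state\n");
    return 0;
}
```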
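For CVE-2020-29481, access rights keyed by domid outlive the domain that held them. A hedged sketch of sweeping those rights when a domain is destroyed, assuming an invented node/permission layout; it is not xenstored's internal representation.

```c
/* Hedged sketch of the domid-reuse problem behind CVE-2020-29481:
 * per-node permissions are keyed by domid, so they must be swept when
 * a domain dies, or a recycled domid inherits them. */
#include <stdio.h>

#define MAX_PERMS 4

struct node {
    const char *path;
    int perm_domid[MAX_PERMS];      /* domids with access; -1 = empty */
};

/* The fix-shaped operation: drop every right granted to 'domid'. */
static void revoke_domid(struct node *nodes, int count, int domid)
{
    for (int i = 0; i < count; i++)
        for (int j = 0; j < MAX_PERMS; j++)
            if (nodes[i].perm_domid[j] == domid)
                nodes[i].perm_domid[j] = -1;
}

int main(void)
{
    struct node nodes[] = {
        { "/local/domain/1/data", { 0, 7, -1, -1 } },
        { "/local/domain/2/data", { 0, 7, -1, -1 } },
    };

    /* The domain with domid 7 is destroyed; without this sweep, a new
     * domain reusing domid 7 would silently inherit both entries. */
    revoke_domid(nodes, 2, 7);

    for (int i = 0; i < 2; i++)
        printf("%s: perms %d %d\n", nodes[i].path,
               nodes[i].perm_domid[0], nodes[i].perm_domid[1]);
    return 0;
}
```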
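For CVE-2021-28692, the issuing CPU spin-waits with a timeout, and the important part is propagating a detected timeout to the caller instead of swallowing it or blaming an unrelated guest. A hedged sketch, assuming invented helpers (command_completed(), wait_for_completion()); it is not Xen's IOMMU code.

```c
/* Hedged sketch of the timeout-handling point behind CVE-2021-28692:
 * a CPU spin-waits for completion of a command it issued, and a
 * timeout must be reported to the caller rather than ignored. */
#include <errno.h>
#include <stdio.h>

static int completions = 0;

static int command_completed(void)
{
    return ++completions > 100;     /* "hardware" finishes eventually */
}

static int wait_for_completion(unsigned long max_spins)
{
    for (unsigned long spins = 0; spins < max_spins; spins++)
        if (command_completed())
            return 0;
    /* The crucial part: the caller must learn that the operation did
     * NOT complete, so it can fail the request safely. */
    return -ETIMEDOUT;
}

int main(void)
{
    int rc = wait_for_completion(1000);
    printf("wait_for_completion: %s\n", rc ? "timed out" : "done");
    return 0;
}
```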
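For CVE-2020-29482, the length quota is enforced on the path as the guest wrote it, while management tools must later use the longer canonical absolute form. A hedged sketch of the mismatch, assuming an invented XS_PATH_MAX limit and check functions; it is not oxenstored's (OCaml) code.

```c
/* Hedged sketch of the length-check mismatch behind CVE-2020-29482:
 * the quota is applied to the path as the guest supplied it, while
 * tools must later use the canonical absolute form, which is longer.
 * XS_PATH_MAX is an invented name for the limit. */
#include <stdio.h>
#include <string.h>

#define XS_PATH_MAX 32              /* tiny limit, for illustration only */

static int check_relative_only(const char *path)
{
    return strlen(path) <= XS_PATH_MAX;          /* the flawed check */
}

static int check_canonical(int domid, const char *path)
{
    char abs[256];
    /* The robust check measures what tools will actually have to use. */
    int n = snprintf(abs, sizeof(abs), "/local/domain/%d/%s", domid, path);
    return n >= 0 && (size_t)n <= XS_PATH_MAX;
}

int main(void)
{
    const char *p = "data/quite/deeply/nested/key";   /* 28 chars */
    printf("relative-only check: %s\n",
           check_relative_only(p) ? "accepted" : "rejected");
    printf("canonical check:     %s\n",
           check_canonical(42, p) ? "accepted" : "rejected");
    return 0;
}
```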
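For CVE-2020-29484, the limit is checked against watched path plus tag at registration time, but a watch also fires for nodes below the watched path, so the delivered event can exceed the same limit. A hedged arithmetic sketch; the 4096-byte limit matches the description above, while the rest of the layout is illustrative rather than xenstored's exact wire format.

```c
/* Hedged sketch of the payload arithmetic behind CVE-2020-29484: the
 * limit holds for watched-path + tag at registration time, yet the
 * event that later fires carries a longer sub-path with the same tag. */
#include <stdio.h>
#include <string.h>

#define XENSTORE_PAYLOAD_MAX 4096u

static size_t payload_len(const char *path, const char *tag)
{
    /* path, NUL, tag, NUL: roughly how a watch event is laid out */
    return strlen(path) + 1 + strlen(tag) + 1;
}

int main(void)
{
    char tag[4071];                              /* 4070-byte tag */
    memset(tag, 'T', sizeof(tag) - 1);
    tag[sizeof(tag) - 1] = '\0';

    const char *watched = "/local/domain/7/data";
    const char *fired   = "/local/domain/7/data/some/deeper/child/node";

    /* Registration fits under the limit; the fired event does not. */
    printf("at registration:  %zu bytes (limit %u)\n",
           payload_len(watched, tag), XENSTORE_PAYLOAD_MAX);
    printf("event that fires: %zu bytes (limit %u)\n",
           payload_len(fired, tag), XENSTORE_PAYLOAD_MAX);
    return 0;
}
```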