Executive Summary
| Summary | |
|---|---|
| Title | Red Hat Ceph Storage 3.3 security, bug fix, and enhancement update |
| Information | | | |
|---|---|---|---|
| Name | RHSA-2019:2538 | First vendor Publication | 2019-08-21 |
| Vendor | RedHat | Last vendor Modification | 2019-08-21 |
| Severity (Vendor) | N/A | Revision | 01 |
Security-Database Scoring CVSS v3
| Cvss vector : N/A | | | |
|---|---|---|---|
| Overall CVSS Score | NA | | |
| Base Score | NA | Environmental Score | NA |
| Impact SubScore | NA | Temporal Score | NA |
| Exploitability Sub Score | NA | | |
Security-Database Scoring CVSS v2
| Cvss vector : (AV:N/AC:L/Au:N/C:P/I:N/A:N) | | | |
|---|---|---|---|
| Cvss Base Score | 5 | Attack Range | Network |
| Cvss Impact Score | 2.9 | Attack Complexity | Low |
| Cvss Exploit Score | 10 | Authentication | None Required |
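The CVSS v2 numbers above follow directly from the published vector via the standard CVSS v2 base equations. The Python sketch below is not part of the advisory; it recomputes the impact, exploitability, and base subscores from the vector string to show where 2.9, 10, and 5 come from.

```python
# Recompute the CVSS v2 subscores for (AV:N/AC:L/Au:N/C:P/I:N/A:N)
# using the standard CVSS v2 metric weights and base equation.

# Metric weights from the CVSS v2 specification.
ACCESS_VECTOR = {"L": 0.395, "A": 0.646, "N": 1.0}
ACCESS_COMPLEXITY = {"H": 0.35, "M": 0.61, "L": 0.71}
AUTHENTICATION = {"M": 0.45, "S": 0.56, "N": 0.704}
CIA_IMPACT = {"N": 0.0, "P": 0.275, "C": 0.660}  # None / Partial / Complete


def cvss2_base(vector: str) -> tuple[float, float, float]:
    """Return (base, impact, exploitability) for a CVSS v2 base vector."""
    m = dict(part.split(":") for part in vector.strip("()").split("/"))
    impact = 10.41 * (1 - (1 - CIA_IMPACT[m["C"]])
                        * (1 - CIA_IMPACT[m["I"]])
                        * (1 - CIA_IMPACT[m["A"]]))
    exploitability = (20 * ACCESS_VECTOR[m["AV"]]
                         * ACCESS_COMPLEXITY[m["AC"]]
                         * AUTHENTICATION[m["Au"]])
    f_impact = 0.0 if impact == 0 else 1.176
    base = ((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact
    return round(base, 1), round(impact, 1), round(exploitability, 1)


if __name__ == "__main__":
    base, impact, exploit = cvss2_base("(AV:N/AC:L/Au:N/C:P/I:N/A:N)")
    print(base, impact, exploit)
```

Running it prints `5.0 2.9 10.0`, matching the Base, Impact, and Exploit scores in the table.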
Detail
Problem Description:

An update is now available for Red Hat Ceph Storage 3.3 on Red Hat Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Ceph Storage 3.3 MON - ppc64le, x86_64
Red Hat Ceph Storage 3.3 OSD - ppc64le, x86_64
Red Hat Ceph Storage 3.3 Tools - noarch, ppc64le, x86_64

3. Description:

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

Security Fix(es):

* ceph: ListBucket max-keys has no defined limit in the RGW codebase (CVE-2018-16846) (see the client-side sketch after the bug list below)
* ceph: debug logging for v4 auth does not sanitize encryption keys (CVE-2018-16889)
* ceph: authenticated user with read only permissions can steal dm-crypt / LUKS key (CVE-2018-14662)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bug Fix(es) and Enhancement(s):

For detailed information on changes in this release, see the Red Hat Ceph Storage 3.3 Release Notes available at:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3.3/html/release_notes/index

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

5. Bugs fixed (https://bugzilla.redhat.com/):

1337915 - purge-cluster.yml confused by presence of ceph installer, ceph kernel threads
1572933 - infrastructure-playbooks/shrink-osd.yml leaves behind NVMe partition; scenario non-collocated
1599852 - radosgw-admin bucket rm --bucket=${bucket} --bypass-gc --purge-objects not cleaning up objects in secondary site
1627567 - MDS fails heartbeat map due to export size
1628309 - MDS should handle large exports in parts
1628311 - MDS balancer may stop prematurely
1631010 - batch: allow journal+block.db sizing on the CLI
1636136 - [cee/sd] add ceph_docker_registry to group_vars/all.yml.sample same way as ceph-ansible does allowing custom registry for systems without direct internet access
1637327 - CVE-2018-14662 ceph: authenticated user with read only permissions can steal dm-crypt / LUKS key
1639712 - dynamic bucket resharding unexpected behavior
1644321 - lvm scenario - stderr: Device /dev/sdb excluded by a filter
1644461 - CVE-2018-16846 ceph: ListBucket max-keys has no defined limit in the RGW codebase
1644610 - [RFE] allow --no-systemd flag for 'simple' sub-command
1644847 - [RFE] ceph-volume zap enhancements based on the OSD ID instead of a device
1651054 - [iSCSI-container] - After cluster purge and recreation, iSCSI target creation failed.
1656908 - [ceph-ansible] Ceph nfs installation fails at task start nfs gateway service in ubuntu ipv6 deployment
1659611 - ceph ansible rolling upgrade does not restart tcmu-runner and rbd-target-api
1661504 - [RFE] append x-amz-version-id in PUT response
1665334 - CVE-2018-16889 ceph: debug logging for v4 auth does not sanitize encryption keys
1666822 - ceph-volume does not always populate dictionary key rotational
1668478 - Failed to Purge Cluster
1668896 - Ability to search by access-key using the radosgw-admin tool [Consulting]
1668897 - Ability to register/associate one email to multiple user accounts [Consulting]
1669838 - [RFE] Including some rgw bits in mgr-restful plugin
1670527 - if LVM is not installed containers don't come up after a system reboot
1670785 - rbd-target-api.service doesn't get started after starting rbd-target-gw.service.
1677269 - Need to add port 9283/tcp to /usr/share/cephmetrics-ansible/roles/ceph-node-exporter/tasks/configure_firewall.yml
1680144 - [RFE] RGW metadata search support for elastic search 6.0 API changes
1680155 - ceph-ansible is configuring VIP address for MON and RGW
1685253 - ceph-ansible non-collocated OSD scenario should not create block.wal by default
1685734 - MDS `cache drop` command does not timeout as expected
1686306 - [ceph-ansible] shrink-osd.yml fails at stopping osd service task
1695850 - ceph-ansible containerized Ceph MDS is limited to 1 CPU core by default - not enough
1696227 - [RFE] print client IP in default debug_ms log level when "bad crc in {front|middle|data}" occurs
1696691 - [CEE/SD] 'ceph osd in any' marks all osds 'in' even if the osds are removed completely from the Ceph cluster.
1696880 - ceph ansible 3.x still sets memory option if
1700896 - Update nfs-ganesha to 2.7.4
1701029 - [RFE] GA support for ASIO/Beast HTTP Frontend
1702091 - nofail option is unsupported in the kernel driver
1702092 - MDS may report spurious warning during subtree migration
1702093 - MDS may hit an assertion during shutdown
1702097 - MDS does not initialize based on config mds_cap_revoke_eviction_timeout
1702099 - MDS may return ENOSPC for a series of renames to a target directory
1702100 - MDS may crash during reconnect when processing reconnect message
1702285 - It takes significantly longer to deploy bluestore than filestore on the same hardware
1702732 - [ceph-ansible] - group_vars files says that default values are based in RHCS 2.x hardware guide
1703557 - rgw: object expirer: handle resharded buckets
1704948 - [Rebase] rebase ceph to 12.2.12
1705258 - RGW: expiration_date returned from lifecycle is in wrong format. [Consulting]
1705922 - Getting versioning state of non-existing bucket returns HTTP Response 200
1708346 - Memory growth when enabling rgw_enable_ops_log = True with no consumption of queue
1708650 - PUT Bucket Lifecycle doesn't clear existing lifecycle policy
1708798 - rgw: luminous: keystone: backport keystone S3 credential caching
1709765 - [RGW]: Radosgw unable to start post upgrade to latest Luminous build
1710855 - nfs ganesha crashed due to invalid rgw_fh pointer passed by FSAL_RGW ?
1713779 - rgw-multisite: 'radosgw-admin bilog trim' stops after 1000 entries
1714810 - MDS may hang during up:rejoin while iterating inodes
1714814 - MDS may try trimming all of its journal at once after recovery
1715577 - [Consulting] Ceph Balancer not working with EC/upmap configuration
1715946 - [RGW-NFS]: objects stored on nfs mount may have inconsistent tail tag and fail to gc
1717135 - S3 client timed out in RGW - listing the large buckets having ~14 million objects with 256 bucket index shards
1718135 - Multiple MDS crashing with assert(mds->sessionmap.get_version() == cmapv) in ESessions::replay while replaying journal
1718328 - S3 client timed out in RGW while listing buckets having 2 million to 5 million objects.
1719023 - ceph-validate : devices are not validated in non-collocated and lvm_batch scenario
1720205 - [GSS] MONs continuously calling for election on lease expiry
1720741 - [RGW] bucket_list on large bucket causing application to not startup, and performance impact on all other clients using RGW
1721165 - MDS session reference count may leak due to regression in 12.2.11
1722663 - ceph-ansible: purge-cluster.yml fails when initiated second time
1722664 - radosgw-admin bucket rm fails to remove a bucket with error "aborted 152 incomplete multipart uploads"
1725521 - Config parser error when import rados config which larger than 1024 bytes
1725536 - few OSDs are not coming up and log error "In function 'void KernelDevice::_aio_thread()' thread 7f3e4ead9700 ... bluestore/KernelDevice.cc: 397: FAILED assert(0 == "unexpected aio error"
1732142 - [RFE] Changing BlueStore OSD rocksdb_cache_size default value to 512MB for helping in compaction
1732706 - [RGW-NFS]: nfs-ganesha aborts due to "Cannot acquire credentials for principal nfs"
1734550 - GetBucketLocation on non-existing bucket doesn't throw NoSuchBucket and gives 200
1739209 - [ceph-ansible] - rolling-update of containerized cluster from 2.x to 3.x failed trying to run systemd-device-to-id.sh saying no such file
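As context for the first security fix above (CVE-2018-16846), the sketch below shows what an over-sized bucket listing looks like from the client side. It is a minimal illustration, not part of the advisory: the endpoint URL, credentials, and bucket name are hypothetical placeholders, and the point is only that max-keys is supplied entirely by the client, which is why RGW needed a server-side bound on the number of entries returned per ListBucket request.

```python
# Minimal illustration of CVE-2018-16846 from the client side (hypothetical
# endpoint, credentials, and bucket; requires the boto3 package).
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",      # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY_PLACEHOLDER",      # placeholder credentials
    aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
)

# The client is free to request an enormous page size. Per the CVE, RGW had no
# defined limit on max-keys, so a single request could force the gateway to
# assemble an arbitrarily large listing; the fix bounds how many entries are
# returned per request.
response = s3.list_objects_v2(
    Bucket="example-bucket",      # placeholder bucket name
    MaxKeys=1_000_000,            # value chosen by the client, not the server
)
print("entries returned:", len(response.get("Contents", [])))
```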
Original Source
Url : https://rhn.redhat.com/errata/RHSA-2019-2538.html
CPE : Common Platform Enumeration
| Type | Description | Count |
|---|---|---|
| Application | | 3 |
| Application | | 2 |
| Os | | 4 |
| Os | | 2 |
| Os | | 1 |
| Os | | 1 |
Alert History
| Date | Information |
|---|---|
| 2020-03-19 13:19:01 | |