Executive Summary

Summary

Title: Red Hat Ceph Storage 3.2 security, bug fix, and enhancement update

Information

Name: RHSA-2019:0911
First vendor publication: 2019-04-30
Vendor: RedHat
Last vendor modification: 2019-04-30
Severity (vendor): N/A
Revision: 01

Security-Database Scoring CVSS v3

CVSS vector: N/A
Overall CVSS score: NA
Base score: NA
Environmental score: NA
Impact subscore: NA
Temporal score: NA
Exploitability subscore: NA
 

Security-Database Scoring CVSS v2

CVSS vector: (AV:N/AC:L/Au:S/C:P/I:N/A:N)
CVSS base score: 4.0
Attack range: Network
CVSS impact score: 2.9
Attack complexity: Low
CVSS exploit score: 8.0
Authentication: Requires single instance
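The base, impact, and exploitability scores above follow directly from the vector using the CVSS v2 equations. As an illustrative re-derivation (not part of the advisory itself), using the metric weights from the CVSS v2 specification:

```python
# CVSS v2 arithmetic for the vector (AV:N/AC:L/Au:S/C:P/I:N/A:N).
# Metric weights come from the CVSS v2 specification.
AV, AC, Au = 1.0, 0.71, 0.56   # Network / Low / Single instance
C, I, A = 0.275, 0.0, 0.0      # Confidentiality Partial / Integrity None / Availability None

# Impact and exploitability sub-scores.
impact = 10.41 * (1 - (1 - C) * (1 - I) * (1 - A))
exploitability = 20 * AV * AC * Au

# f(Impact) is 0 when impact is 0, else 1.176.
f_impact = 0 if impact == 0 else 1.176
base = round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

print(round(impact, 1), round(exploitability, 1), base)  # 2.9 8.0 4.0
```

These reproduce the 2.9 impact subscore, 8.0 exploitability subscore, and 4.0 base score listed above.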

Detail

1. Problem Description:

An update is now available for Red Hat Ceph Storage 3.2.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Ceph Storage 3.2 MON - ppc64le, x86_64
Red Hat Ceph Storage 3.2 OSD - ppc64le, x86_64
Red Hat Ceph Storage 3.2 Tools - noarch, ppc64le, x86_64

3. Description:

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

Security Fix(es):

* grafana: File exfiltration (CVE-2018-19039)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bug Fix(es) and Enhancement(s)

For detailed information on changes in this release, see the Red Hat Ceph Storage 3.2 Release Notes available at:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3.2/html/release_notes/index

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
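As a minimal sketch of the usual errata workflow, assuming a registered RHEL 7 node with the Red Hat Ceph Storage 3.2 repositories enabled (the linked article remains the authoritative procedure):

```shell
# Inspect which packages this advisory updates:
yum updateinfo info RHSA-2019:0911

# Apply only the packages from this advisory:
yum update --advisory=RHSA-2019:0911

# Restart the affected daemons afterwards, e.g. on a Monitor node:
systemctl restart ceph-mon.target
```

Which daemons need restarting depends on the node's role (MON, OSD, or Tools, per section 2).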

5. Bugs fixed (https://bugzilla.redhat.com/):

1506782 - osd_scrub_auto_repair not working as expected
1540881 - [CEE/SD] monitor_interface with "-" in the name fails with "msg": "'dict object' has no attribute u'ansible_bond-monitor-interface'"
1593110 - Ceph mgr daemon crashing after starting balancer module in automatic mode
1600138 - [Bluestore]: one of the osds flapped multiple times with 1525: FAILED assert(0 == "bluefs enospc")
1636251 - ceph-keys fails if RHEL is configured in FIPS mode
1638092 - Default crush rule is not enforced
1639833 - [RFE] Enabling CRUSH device classes should not incur data movement in the cluster
1648168 - ceph-validate: devices are not validated in non-collocated and lvm_batch scenario
1649697 - CVE-2018-19039 grafana: File exfiltration
1653307 - [ceph-ansible] - lvms not removed while purging cluster
1656935 - ceph-ansible: purge-cluster.yml fails when initiated second time
1660962 - rgw does not support delimiter as a string, it only supports a single character [consulting]
1664869 - [RFE] Support configuring multiple RGW endpoints in ceph-ansible for RGW multisite
1666407 - MDS may hang at startup if PurgeQueue metadata objects are damaged
1666408 - ceph-fuse may miss reconnect during MDS switch
1666409 - MDS should allow configuration of heartbeat timeout
1668050 - [RFE] RGW OPA authorization tech preview
1668362 - Verify PG recovery control / 3 line items from BB spreadsheet
1669901 - [RFE] Implement mechanism and command to change/reset bucket objects owner / RGW bucket chown
1670165 - Bucket lifecycle: bucket is not getting added to lc list when `'NoncurrentVersionExpiration': {'NoncurrentDays': 2}` is set
1670321 - [GSS] Downloads are corrupted when using RGW with civetweb as frontend
1670663 - [Ceph-Ansible][ceph-containers] Add new OSD node to the existing ceph cluster is failing with '--limit osds' option
1672333 - Optimize MDS stale cap revoke behavior
1672878 - [Ceph-Ansible][ceph-containers] Missing permission for MDS in client.admin
1673687 - Failure creating ceph.conf for mon - No first item, sequence was empty.
1674549 - [cee/sd][ceph-mgr] luminous: deadlock in standby ceph-mgr daemons
1678470 - BlueStore OSD crashes in _do_read - BlueStore::_do_read
1679263 - radosgw-admin bucket limit check stuck generating high read ops with > 999 buckets per user [Consulting]
1680171 - containerized radosgw requires higher --cpu-quota as default
1683997 - permissions in /var/lib/ceph/mon aren't set properly
1684146 - Ability to start ceph daemons with numactl
1684283 - Ceph Containers SSL support - Daemons like RGW when using rgw-multisite causing an issue in communication and sync stuck
1684289 - Testing RGW Multi-site SSL support
1684435 - Bucket lifecycle: Current version of the object does not get deleted for Tag based filters.
1684642 - [RFE] rgw-multisite: add perf counters to data sync
1685733 - MDS may abort when handling deleted file
1685735 - Monitors will assign standby-replay to degraded ranks
1687038 - os/filestore: ceph_abort() on fsync(2) or fdatasync(2) failure
1687039 - osd/PG.cc: account for missing set irrespective of last_complete
1687041 - mon/OSDMonitor: do not populate void pg_temp into nextmap
1687567 - rgw: use of PK11_ImportSymKey implies non-FIPS-compliant key management workflow (blocks FIPS)
1687828 - [cee/sd][ceph-ansible] rolling-update.yml does not restart nvme osds running in containers
1688330 - Request for backport for fixed issue https://tracker.ceph.com/issues/21533
1688378 - ops waiting for resharding to complete may not be able to complete when resharding does complete
1688541 - command `radosgw-admin bi put` does not rightly set the mtime
1688869 - rgw: Lifecycle: handle resharded buckets
1689266 - rgw: unordered bucket listing markers do not handle adorned object names correctly
1689410 - s3cmd info not working on Ceph 3.2 (cors policies) giving 500 (Internal Server Error)
1690941 - Some multipart uploads with SSE-C are corrupted
1692555 - 'radosgw-admin sync status' does not show timestamps for master zone
1693445 - rgw-multisite sync stuck recovering shard in already deleted versioned bucket
1695174 - rgw: fix eval bucket policies and perms permissions for non-existent objects
1699478 - rgw-multisite: log trimming does not make progress unless zones 'sync_from_all'
1701970 - Inefficient unordered bucket listing
1702311 - [cee/sd][ceph-ansible] shrink-osd.yml is failing due to missing osd_fsid in "ceph --cluster ceph osd find 0" output

Original Source

Url : https://rhn.redhat.com/errata/RHSA-2019-0911.html

CWE : Common Weakness Enumeration

% Id Name
100 % CWE-200 Information Exposure

CPE : Common Platform Enumeration

Type Description Count
Application 22
Application 1
Application 1
Application 1
Os 1
Os 1
Os 1

Alert History

Date Information
2020-03-19 13:18:11
  • First insertion