Executive Summary

Summary
Title: Red Hat Ceph Storage 2.5 security and bug fix update

Information
Name: RHSA-2019:0747
Vendor: RedHat
First vendor publication: 2019-04-11
Last vendor modification: 2019-04-11
Severity (vendor): N/A
Revision: 01
Security-Database Scoring CVSS v3

CVSS vector: N/A
Overall CVSS score: N/A
Base score: N/A
Environmental score: N/A
Impact subscore: N/A
Temporal score: N/A
Exploitability subscore: N/A

Security-Database Scoring CVSS v2

CVSS vector: (AV:N/AC:L/Au:S/C:P/I:N/A:N)
CVSS base score: 4.0
CVSS impact score: 2.9
CVSS exploit score: 8.0
Attack range: Network
Attack complexity: Low
Authentication: Requires single instance

Detail

Problem Description:

An update for ceph and grafana is now available for Red Hat Ceph Storage 2.5 for Red Hat Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Ceph Storage 2.5 MON - x86_64
Red Hat Ceph Storage 2.5 OSD - x86_64
Red Hat Ceph Storage 2.5 Tools - x86_64

3. Description:

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

Security Fix(es):

* grafana: File exfiltration (CVE-2018-19039)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bug Fix(es):

* This issue was discovered with OpenStack Cinder Backup when 'rados_connect_timeout' was set. Normally the timeout is not enabled. If the cluster was highly loaded, the timeout could be reached, causing a segfault. With this update to Red Hat Ceph Storage, a segfault no longer occurs if the timeout is reached. (BZ#1655685)
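For context, 'rados_connect_timeout' is set in the Cinder configuration when the Ceph backup driver is in use. A minimal sketch of such a configuration follows; the section placement, driver path, and timeout value are illustrative and may vary by OpenStack release:

```ini
# cinder.conf (illustrative fragment; values are examples, not recommendations)
[DEFAULT]
backup_driver = cinder.backup.drivers.ceph

# Timeout (seconds) for connecting to the Ceph cluster.
# The default of -1 disables the timeout, which is why the
# segfault described above normally did not occur.
rados_connect_timeout = 30
```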

* With this release, you now have the ability to reset a user's statistics using the 'radosgw-admin' command. In previous versions, the user's recorded statistics diverged from the actual statistics. When using the '--reset-stats' option with the 'radosgw-admin' command, along with specifying the Ceph Object Gateway user, the stats will be recalculated. (BZ#1673217)
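As an illustration of the recalculation described above, the option is passed to the 'user stats' subcommand of 'radosgw-admin'; the user ID shown here is a hypothetical placeholder:

```shell
# Recalculate (reset) the recorded usage statistics for one
# Ceph Object Gateway user; "testuser" is a placeholder UID.
radosgw-admin user stats --uid=testuser --reset-stats

# The same subcommand without --reset-stats reports the
# (now recalculated) statistics for that user.
radosgw-admin user stats --uid=testuser
```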

* In the duplicate checking code an inconsistency was found that caused duplicate indices to be added, instead of trimming them. The duplicate checking code logic has been fixed, making adding and trimming duplicate indices consistent, which results in correctly trimming duplicate indices. (BZ#1676709)

* Two bugs were found in the garbage collection list iteration logic. One of these bugs was a race condition when doing system restarts. These bugs were causing higher-than-expected workloads and stalling in garbage collection processing. Issues with list truncation and entry deletion were fixed, reducing the potential for garbage collection stalls and high-read I/O during garbage collection removal. (BZ#1680050)

* Due to a bug in multi-site sync of versioning-suspended buckets, certain object versioning attributes were overwritten with incorrect values. Consequently, the objects failed to sync and attempted to retry endlessly, blocking further sync progress. With this update, the sync process no longer overwrites versioning attributes. In addition, any broken attributes are now detected and repaired. As a result, objects are synced correctly in versioning-suspended buckets. (BZ#1690927)

* Previously, bucket indices could include "false entries" that did not represent actual objects and that resulted from a prior bug. Consequently, during the process of deleting such buckets, encountering a false entry caused the process to stop and return an error code. With this update, when a false entry is encountered, Ceph ignores it, and deleting buckets with false entries works as expected. (BZ#1690930)

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
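The linked article describes Red Hat's standard procedure for applying package updates; on Red Hat Enterprise Linux 7 this is typically done with yum. A sketch, not the advisory's literal instructions:

```shell
# Apply all available updates, including this advisory's packages.
yum update

# Or restrict the update to packages named in this advisory, e.g.:
yum update ceph grafana
```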

5. Bugs fixed (https://bugzilla.redhat.com/):

1493597 - Performing a manila access-allow on an existing auth entry in Ceph corrupts the permissions.
1565221 - "set_fact docker_exec_cmd" assumes there will be mons, but does not use the external list of mons if provided
1649697 - CVE-2018-19039 grafana: File exfiltration
1655685 - rbd_snap_list_end() segfaults if rbd_snap_list() fails
1660611 - Intermittent S3 bucket list and swift container list are broken after upgrading to RHCS 2.5.z2 - 10.2.10-40.el7cp
1676709 - ceph-osd continuous memory growth one of the daemons using 50G+ RSS
1680050 - [RHCS 2.x] GC erratic performance, very slow deletion performance
1690922 - RGW memory leak OOM in a multisite environment
1690927 - multisite sync errors from operations on a versioning-suspended bucket
1690930 - Customer cannot delete versioned bucket
1690932 - rgw-multisite: bilog entries not getting trimmed in both sites
1690934 - Fix issue with concurrent operations on versioned objects

Original Source

Url : https://rhn.redhat.com/errata/RHSA-2019-0747.html

CWE : Common Weakness Enumeration

%      Id        Name
100 %  CWE-200   Information Exposure

CPE : Common Platform Enumeration

Type  Description  Count
Application 22
Application 1
Application 1
Application 1
Os 1
Os 1
Os 1

Alert History

Date Informations
2020-03-19 13:18:08
  • First insertion