{"id":158,"date":"2025-02-11T15:32:52","date_gmt":"2025-02-11T15:32:52","guid":{"rendered":"https:\/\/blog.ivos.place\/?p=158"},"modified":"2025-02-12T06:30:14","modified_gmt":"2025-02-12T06:30:14","slug":"clash-of-data-ingestion-comparing-cu-usage-in-fabric","status":"publish","type":"post","link":"https:\/\/blog.ivos.place\/?p=158","title":{"rendered":"Clash of Data Ingestion: Comparing CU Usage (and other factors) in Fabric"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">Disclaimer<\/h2>\n\n\n\n<p>This blog post contains information about CU usage in Microsoft Fabric. Since I am not a specialist in capacity usage and the Usage Metrics App is still evolving, please consider this information as indicative rather than definitive. Additionally, the choice of technology always <strong>depends<\/strong> on best practices, specific frameworks, and the customer\u2019s unique situation.<\/p>\n\n\n\n<p>Please also consider the limited amount of data: these follow-up tests are based on the WWI-Sample database and cover only a few tables of the Sales area:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"313\" height=\"428\" src=\"https:\/\/blog.ivos.place\/wp-content\/uploads\/2025\/02\/image-2.png\" alt=\"\" class=\"wp-image-166\" srcset=\"https:\/\/blog.ivos.place\/wp-content\/uploads\/2025\/02\/image-2.png 313w, https:\/\/blog.ivos.place\/wp-content\/uploads\/2025\/02\/image-2-219x300.png 219w\" sizes=\"auto, (max-width: 313px) 100vw, 313px\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Idea<\/h2>\n\n\n\n<p>Following my previous <a href=\"https:\/\/blog.ivos.place\/?p=137\" data-type=\"link\" data-id=\"https:\/\/blog.ivos.place\/?p=137\">blog post<\/a>, I\u2019d like to explore alternative methods for data ingestion using Microsoft&#8217;s Wide World Importers sample database. My goal is to compare the CU usage across different ingestion technologies in Microsoft Fabric. 
In addition, I aim to offer recommendations and ratings for each technology. This post will provide a quick overview of various ingestion approaches based on an Azure SQL Database, with the hope that it will help you with your own projects.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Results<\/h2>\n\n\n\n<p>I aim to not only assess the <strong>CU usage<\/strong> of the different technologies but also evaluate the <strong>overall duration <\/strong>of the ingestion process, the <strong>flexibility<\/strong>, and the <strong>intuitiveness <\/strong>of each respective technology.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">CU usage<\/h3>\n\n\n\n<p>Let\u2019s address the elephant in the room first. Phew, I must admit that I found it quite challenging to get accurate numbers for the CU usage. For example, with ingestion via the notebook, the service took some time just to start the session. I couldn\u2019t find an easy way to separate the CU usage for session startup from the actual execution. Despite this, I did some retesting and believe I got at least close to the correct figures. Of course, CU usage can vary depending on how it\u2019s implemented in your framework and your specific projects. For instance, if you start a new session for each notebook to load data, the CU usage overhead might be higher.<\/p>\n\n\n\n<p>Looking at the metrics app, I found that <strong>Dataflow Gen2<\/strong> seems to consume the most CUs, while <strong>notebooks <\/strong>perform significantly better than <strong>Data Factory Pipelines<\/strong> with copy activities\u2014this disparity is what motivated me to write this blog post. 
This fact brings us to an important takeaway: <strong>If you&#8217;re dealing with many small tables, consider switching from Pipelines to Notebooks.<\/strong> Pipeline Copy Activities always round up to a full minute (as I explained in my previous <a href=\"https:\/\/blog.ivos.place\/?p=137\" data-type=\"link\" data-id=\"https:\/\/blog.ivos.place\/?p=137\">blog post<\/a>), which can lead to unnecessary CU consumption. Moving to Notebooks can help you optimize both performance and costs.<\/p>\n\n\n\n<p><strong>Database mirroring<\/strong> is a bit of a special case, since the measured CU usage includes the initial setup of the mirrored database. However, it\u2019s also difficult to determine which activities are strictly related to the mirroring, as there were numerous other operations happening in the target lakehouse (where I loaded the data). I focused on the relevant timestamps and took all the lakehouse operations into account (WWI-Sample represents the mirrored database).<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"170\" src=\"https:\/\/blog.ivos.place\/wp-content\/uploads\/2025\/02\/image-1024x170.png\" alt=\"\" class=\"wp-image-159\" srcset=\"https:\/\/blog.ivos.place\/wp-content\/uploads\/2025\/02\/image-1024x170.png 1024w, https:\/\/blog.ivos.place\/wp-content\/uploads\/2025\/02\/image-300x50.png 300w, https:\/\/blog.ivos.place\/wp-content\/uploads\/2025\/02\/image-768x128.png 768w, https:\/\/blog.ivos.place\/wp-content\/uploads\/2025\/02\/image.png 1289w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Nevertheless, here are the results for the CU usage:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"493\" height=\"386\" src=\"https:\/\/blog.ivos.place\/wp-content\/uploads\/2025\/02\/image-1.png\" alt=\"\" class=\"wp-image-160\" 
srcset=\"https:\/\/blog.ivos.place\/wp-content\/uploads\/2025\/02\/image-1.png 493w, https:\/\/blog.ivos.place\/wp-content\/uploads\/2025\/02\/image-1-300x235.png 300w\" sizes=\"auto, (max-width: 493px) 100vw, 493px\" \/><\/figure>\n\n\n\n<p>Microsoft describes database mirroring as a low-cost technology, which is absolutely true\u2014the initial mirroring process is very resource-efficient. However, using a mirrored database isn\u2019t always an option in every environment. In my real-world scenario, which led me to this topic, mirroring wasn\u2019t viable because I needed a complete capture of the entire day as Parquet files in my destination lakehouse for archival purposes. Achieving this efficiently requires full control over the ingestion process, which isn\u2019t easily possible with mirroring.<\/p>\n\n\n\n<p>Let&#8217;s have a look at the other evaluation factors.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Overall duration<\/h3>\n\n\n\n<p>In terms of overall duration, <strong>notebook <\/strong>execution was the fastest, completing the ingestion of all Sales tables from the WWI-Sample database in just 44 seconds. This was followed by <strong>Dataflows Gen2<\/strong> at 49 seconds and <strong>Data Factory Pipelines<\/strong> at 88 seconds. 
The entire <strong>database mirroring<\/strong> process took about 5 minutes from setup to the first full synchronization, but once configured, it could be extremely fast for handling new data in near real-time.<\/p>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\">\n<figure class=\"wp-block-table is-style-regular\"><table class=\"has-fixed-layout\"><tbody><tr><th>Type<\/th><th>Duration [s]<\/th><\/tr><tr><td>Notebook<\/td><td>44<\/td><\/tr><tr><td>Dataflow Gen 2<\/td><td>49<\/td><\/tr><tr><td>Data Factory Pipeline<\/td><td>88<\/td><\/tr><tr><td>Mirrored DB<\/td><td>300*<\/td><\/tr><\/tbody><\/table><\/figure>\n<p>* From initial setup to the first full synchronization.<\/p>\n<\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Flexibility<\/h3>\n\n\n\n<p>If we focus more on real-time projects\u2014like mine originally\u2014both Dataflows and the Mirrored DB are immediately out of consideration. I\u2019ve already mentioned the need for an &#8220;archive&#8221; function, but when it comes to Dataflows, the main drawback for me is the lack of programmability. I wasn\u2019t able to easily ingest all tables within the Sales schema of the database. Yes, Power Query offers extensive transformation capabilities, but when it comes to pure data ingestion, the process feels too rigid for my needs.<\/p>\n\n\n\n<p>That said, I still want to give some credit for how configurable each ingestion method is in adapting to specific requirements. 
This brings me to my (very personal) evaluation:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><th>Type<\/th><th>Flexibility<\/th><\/tr><tr><td>Notebook<\/td><td>\u2605\u2605\u2605\u2605\u2605<\/td><\/tr><tr><td>Dataflow Gen 2<\/td><td>\u2605\u2605<\/td><\/tr><tr><td>Data Factory Pipeline<\/td><td>\u2605\u2605\u2605\u2605<\/td><\/tr><tr><td>Mirrored DB<\/td><td>\u2605\u2605<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>You may have noticed that I really like working with notebooks. I was amazed that I could ingest the data with just a few lines of code. The following snippet performs the actual ingestion of the tables using the SQL Server data dictionary; the lines before it merely set up the database connection and imported the libraries.<\/p>\n\n\n\n<p>Imagine what you can do with all the custom libraries in Python! <\/p>\n\n\n\n<div class=\"wp-block-kevinbatdorf-code-block-pro\" data-code-block-pro-font-family=\"Code-Pro-JetBrains-Mono\" style=\"font-size:.875rem;font-family:Code-Pro-JetBrains-Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,monospace;line-height:1.25rem;--cbp-tab-width:2;tab-size:var(--cbp-tab-width, 2)\"><span style=\"display:block;padding:16px 0 0 16px;margin-bottom:-1px;width:100%;text-align:left;background-color:#2e3440ff\"><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"54\" height=\"14\" viewBox=\"0 0 54 14\"><g fill=\"none\" fill-rule=\"evenodd\" transform=\"translate(1 1)\"><circle cx=\"6\" cy=\"6\" r=\"6\" fill=\"#FF5F56\" stroke=\"#E0443E\" stroke-width=\".5\"><\/circle><circle cx=\"26\" cy=\"6\" r=\"6\" fill=\"#FFBD2E\" stroke=\"#DEA123\" stroke-width=\".5\"><\/circle><circle cx=\"46\" cy=\"6\" r=\"6\" fill=\"#27C93F\" stroke=\"#1AAB29\" stroke-width=\".5\"><\/circle><\/g><\/svg><\/span><span role=\"button\" tabindex=\"0\" data-code=\"# Query the SQL Server data dictionary for all tables in the Sales schema\nquery = &quot;(SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'Sales') as qry&quot;  # Replace with your schema name\n\n# Load the list of tables from SQL Server into a Spark DataFrame via JDBC\ndf_tables = spark.read.jdbc(url=jdbc_url, table=query, properties=connection_properties)\n\n# Read each table (skipping the _Archive tables) and write it to the lakehouse as Parquet files\nfor tbl in df_tables.collect():\n    if not tbl[1].endswith(&quot;_Archive&quot;):\n        df = spark.read.jdbc(\n            url=jdbc_url, table=f&quot;{tbl[0]}.{tbl[1]}&quot;, properties=connection_properties\n        )\n        df.write.mode(&quot;overwrite&quot;).parquet(f&quot;Files\/raw_ntb\/{tbl[1]}&quot;)\n\" style=\"color:#d8dee9ff;display:none\" aria-label=\"Copy\" class=\"code-block-pro-copy-button\"><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"width:24px;height:24px\" fill=\"none\" viewBox=\"0 0 24 24\" stroke=\"currentColor\" stroke-width=\"2\"><path class=\"with-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-6 9l2 2 4-4\"><\/path><path class=\"without-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2\"><\/path><\/svg><\/span><pre class=\"shiki nord\" style=\"background-color: #2e3440ff\" tabindex=\"0\"><code><span class=\"line\"># Query the SQL Server data dictionary for all tables in the Sales schema<\/span>\n<span class=\"line\">query = &quot;(SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = &#39;Sales&#39;) as qry&quot;  # Replace with your schema name<\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"># Load the list of tables from SQL Server into a Spark DataFrame via JDBC<\/span>\n<span class=\"line\">df_tables = spark.read.jdbc(url=jdbc_url, table=query, properties=connection_properties)<\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"># Read each table (skipping the _Archive tables) and write it to the lakehouse as Parquet files<\/span>\n<span class=\"line\">for tbl in df_tables.collect():<\/span>\n<span class=\"line\">    if not tbl[1].endswith(&quot;_Archive&quot;):<\/span>\n<span class=\"line\">        df = spark.read.jdbc(<\/span>\n<span class=\"line\">            url=jdbc_url, table=f&quot;{tbl[0]}.{tbl[1]}&quot;, properties=connection_properties<\/span>\n<span class=\"line\">        )<\/span>\n<span class=\"line\">        df.write.mode(&quot;overwrite&quot;).parquet(f&quot;Files\/raw_ntb\/{tbl[1]}&quot;)<\/span>\n<span class=\"line\"><\/span><\/code><\/pre><\/div>\n\n\n\n<p>Data Factory Pipelines offer a high degree of customization through loops, parameters, variables, and more. However, I see notebooks as the clear winner in this regard. As mentioned earlier, the mirroring process has very limited flexibility, and Dataflows remain quite static.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Intuitiveness<\/h3>\n\n\n\n<p>Let\u2019s look at this from another perspective. What if your company doesn\u2019t have any Python expertise? 
What if you want to enable employees to set up data ingestion processes on their own, without needing deep technical knowledge?<\/p>\n\n\n\n<p>In this scenario, I see <strong>Dataflows as the clear winner<\/strong>. With its visual interface for transforming data, it provides the most user-friendly experience, making it the easiest ingestion method for non-technical users. <strong>Mirroring is also quite intuitive<\/strong>, but it\u2019s more of a straightforward selection process\u2014choosing which tables to ingest rather than configuring an entire workflow.<\/p>\n\n\n\n<p><strong>Data Factory Pipelines<\/strong>, on the other hand, require a deeper understanding of the technology, including how expressions and functions work within Azure Data Factory. <strong>Notebooks demand even more technical expertise<\/strong>, as a basic understanding of Python is essential. While they offer the most flexibility, they are not the ideal choice for teams without programming knowledge.<\/p>\n\n\n\n<p>So, when ease of use is the priority, Dataflows stand out as the most accessible option.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><th>Type<\/th><th>Intuitiveness<\/th><\/tr><tr><td>Notebook<\/td><td>\u2605<\/td><\/tr><tr><td>Dataflow Gen 2<\/td><td>\u2605\u2605\u2605\u2605\u2605<\/td><\/tr><tr><td>Data Factory Pipeline<\/td><td>\u2605\u2605\u2605<\/td><\/tr><tr><td>Mirrored DB<\/td><td>\u2605\u2605\u2605\u2605<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>As always, choosing the right technology depends on the specific needs of your project and customer scenario. In my case, I\u2019m leaning towards using notebooks within our framework for data ingestion. 
But before you jump in, ask yourself: <strong>Are your developers ready for this technology?<\/strong><\/p>\n\n\n\n<p>Finding the <strong>sweet spot<\/strong> between <strong>capacity usage, ease of use, and flexibility<\/strong> is like trying to balance a three-legged stool\u2014lean too far in one direction, and you might end up on the floor. My advice? Start with some <strong>Proof-of-Concept cases<\/strong> to test the waters. That way, you\u2019ll have a solid foundation to pick your favorite data ingestion method in Fabric\u2014without too many surprises along the way!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Disclaimer This blog post contains information about CU usage in Microsoft Fabric. Since I am not a specialist in capacity usage and the Usage Metrics App is still evolving, please consider this information as indicative rather than definitive. Additionally, the choice of technology always depends on best practices, specific frameworks, and the customer\u2019s unique situation. 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[13],"tags":[19,14,15,16],"class_list":["post-158","post","type-post","status-publish","format-standard","hentry","category-fabric","tag-capacity","tag-fabric","tag-microsoft","tag-notebook"],"_links":{"self":[{"href":"https:\/\/blog.ivos.place\/index.php?rest_route=\/wp\/v2\/posts\/158","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.ivos.place\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.ivos.place\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.ivos.place\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.ivos.place\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=158"}],"version-history":[{"count":5,"href":"https:\/\/blog.ivos.place\/index.php?rest_route=\/wp\/v2\/posts\/158\/revisions"}],"predecessor-version":[{"id":168,"href":"https:\/\/blog.ivos.place\/index.php?rest_route=\/wp\/v2\/posts\/158\/revisions\/168"}],"wp:attachment":[{"href":"https:\/\/blog.ivos.place\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=158"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.ivos.place\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=158"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.ivos.place\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=158"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}