Yeah, that's what I figured, early morning, so the grass has time to comfortably soak it in, but if I read the prior correctly, they were suggesting 10p - 6a.

Nah, you chose your bed.. EIGHT?

I'm sure you're happy, but geeeeeeeeeeeez

I'm lucky to manage myself

Genuinely curious, I've seen mixed suggestions on WHEN to water, and those against the time frame you suggest say it can lead to disease/mold due to slower evaporation.

Unfortunately, I learned this shortly after posting myself lmao

Just redid all 9.. smh

FWIW, you can track your progress (leaving for others that landed here like me)

Journey -> Triumphs -> Pale Heart -> Exploration -> Tower Defender

Nope..

To achieve something similar with a struct, it would mean that I am coupling (very loosely, but still) multiple implementations.

package main

import (
    "fmt"
)

func main() {
    httpService := NewHTTPService(WithPort(8080), WithTimeout(60))
    httpService.Start()

    dbService := NewDBService(WithConnString("localhost:3306"), WithPoolSize(20))
    dbService.Start()
}

// ServiceOption is a function that configures a service.
type ServiceOption func(Service)

// Service is an interface that any service must implement.
type Service interface {
    Start() error
}

// HTTPService is one implementation of the Service interface.
type HTTPService struct {
    port    int
    timeout int
}

// NewHTTPService creates a new HTTPService with the given options.
func NewHTTPService(opts ...ServiceOption) *HTTPService {
    s := &HTTPService{
        port:    80,
        timeout: 30,
    }
    for _, opt := range opts {
        opt(s)
    }
    return s
}

// Start starts the HTTP service.
func (s *HTTPService) Start() error {
    fmt.Printf("Starting HTTP service on port %d with timeout %d\n", s.port, s.timeout)
    return nil
}

// WithPort sets the port for the HTTPService.
func WithPort(port int) ServiceOption {
    return func(s Service) {
        if httpService, ok := s.(*HTTPService); ok {
            httpService.port = port
        }
    }
}

// WithTimeout sets the timeout for the HTTPService.
func WithTimeout(timeout int) ServiceOption {
    return func(s Service) {
        if httpService, ok := s.(*HTTPService); ok {
            httpService.timeout = timeout
        }
    }
}

// DBService is another implementation of the Service interface.
type DBService struct {
    connString string
    poolSize   int
}

// NewDBService creates a new DBService with the given options.
func NewDBService(opts ...ServiceOption) *DBService {
    s := &DBService{
        connString: "localhost:5432",
        poolSize:   10,
    }
    for _, opt := range opts {
        opt(s)
    }
    return s
}

// Start starts the DB service.
func (s *DBService) Start() error {
    fmt.Printf("Starting DB service with connection string %s and pool size %d\n", s.connString, s.poolSize)
    return nil
}

// WithConnString sets the connection string for the DBService.
func WithConnString(connString string) ServiceOption {
    return func(s Service) {
        if dbService, ok := s.(*DBService); ok {
            dbService.connString = connString
        }
    }
}

// WithPoolSize sets the pool size for the DBService.
func WithPoolSize(poolSize int) ServiceOption {
    return func(s Service) {
        if dbService, ok := s.(*DBService); ok {
            dbService.poolSize = poolSize
        }
    }
}
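For contrast, here's what the tighter coupling avoided above would look like: a per-type option signature lets the compiler reject mismatched options (instead of the type assertion silently ignoring them), at the cost of each service needing its own option type. A minimal sketch, with names of my own choosing:

```go
package main

import "fmt"

// HTTPOption configures only an HTTPService, so passing it to any other
// service type is a compile error rather than a silent no-op.
type HTTPOption func(*HTTPService)

type HTTPService struct {
	port    int
	timeout int
}

// WithPort sets the port; no type assertion needed.
func WithPort(port int) HTTPOption {
	return func(s *HTTPService) { s.port = port }
}

// NewHTTPService applies defaults, then each option in order.
func NewHTTPService(opts ...HTTPOption) *HTTPService {
	s := &HTTPService{port: 80, timeout: 30}
	for _, opt := range opts {
		opt(s)
	}
	return s
}

func main() {
	s := NewHTTPService(WithPort(8080))
	fmt.Println(s.port, s.timeout)
}
```

The tradeoff is exactly the coupling question in the comment above: the shared `ServiceOption` type lets unrelated services share one option vocabulary, while the per-type version trades that flexibility for compile-time safety.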

Table-driven tests are harder to debug when some test case fails, since it isn't obvious how to locate the actual test case args for the failed test.

To me, this is an indication of how the test is written and the tooling used.

For instance, between testify and mockery, it's rare that I struggle for more than a few moments to understand where the breakdown is, and if so, it's just a matter of debugging.

When an assertion fails or an error is tripped, it points me directly to the root of the issue.

Table-driven tests are harder to read, since you need to scroll down to the test loop in order to understand what the test does, before reading test cases.

I definitely don't disagree that in a lot of cases, table tests are more difficult to read than explicit tests, or f-tests as they're named here.

However, the maintenance/overhead in my experience drives the need to consolidate.

Counter to that, though: using something like testify's Suite can help manage the setup, teardown, and validation abstraction to minimize your boilerplate, but that still leaves you with a lot of repeated code throughout your tests.

Could you explain in more detail why you think so?

It really does come down to maintenance for me.

If I update a method that alters the behavior to an extent where tests now fail, obviously, those tests need to be updated.

So which is easier? Address it in N separate test methods or one time in one method?
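To make that maintenance point concrete, here's a minimal table-driven sketch. `Greet` and its cases are hypothetical, and a real version would live in a `TestXxx(t *testing.T)` with `t.Run`; this is just the shape of the argument:

```go
package main

import "fmt"

// Greet is a hypothetical function standing in for the method under test.
func Greet(name string) string {
	return "Hello, " + name + "!"
}

func main() {
	// All assertion logic lives in one loop; each scenario is just a row.
	// If Greet's behavior changes, you update the loop (or the expected
	// values) in one place instead of in N separate test functions.
	cases := []struct {
		name, in, want string
	}{
		{"simple name", "Alice", "Hello, Alice!"},
		{"empty name", "", "Hello, !"},
	}
	for _, c := range cases {
		if got := Greet(c.in); got != c.want {
			fmt.Printf("FAIL %s: got %q, want %q\n", c.name, got, c.want)
		}
	}
	fmt.Println("all cases checked")
}
```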

f-tests are successfully used in production instead of table-driven tests. The original article provides links to production tests. Could you give more details on what kind of copy-pasta you mean?

Yes, they absolutely are, and I don't knock anyone for using them; it's simply not my style, and I've yet to encounter someone who felt strongly enough about either one that a pull request would be in contention due to diverging patterns.

That's another factor.. while I may prefer table tests, if the accepted pattern for this codebase is f-tests styling, then that's what I'll adhere to.

I prefer Thanos in most production use cases.

Not needing to manage roll-ups is a huge gain in performance, cost, and time.

Is there a reason you don't prefer table driven?

Like, what's the problem you're trying to solve here?

Respectfully, not a single senior+ dev is going to agree with this approach.

You say this pattern is easier to read, and I don't necessarily disagree, but in production code your tests are going to be larger and more complex than what you have here, so the tradeoff is a ton of copy pasta code.

You've exchanged velocity and maintenance entirely for a problem most would agree isn't a problem.

IN GENERAL this is true.

I have a large male that squats, so to satisfy your semantic conundrum: if a dog urinates close to the ground, the nitrogen is concentrated, whereas the opposite is more spread about, and thus diluted.

This pattern is useful when being leveraged across interfaces, imo.

Thank you very much for taking the time.. I've got one more for you.

Bribery was not made legal—that’s absurd. The SC simply ruled that the current statute passed by Congress does not apply to payments made after the fact. But Congress (or any state legislature) can easily fix that problem in five minutes by clearing up the statute.

If I read you correctly based on "can easily fix that problem," this operates under the assumption that those who benefit directly from this change are also the ones we'd hope would enact legal measures to prevent it?

The clarification of "payments after the fact" just feels like "I'll do this for you, but you have to wait to give me the kickback." No?

If you think this is something.. their table side interaction will really set you back.

Genuine question.

In the past three decades, were any long-standing precedents unraveled like they have been as of late?

Privacy is the real issue behind Roe. Bribery was made legal, the backbone of every regulatory agency just got cracked in half, and we're advocating for punishing the homeless for being homeless..

These feel a bit extreme to call them normal

Can't tell from the imagery, but I'd expect it to have a slight curve if it were a reflection tracing the path of the sun, unless this is also the perfect location, angle, and time of year to cause a straight line.

It's actually a blessing in disguise.

Build a contraption to pull a weighted 6x6 behind you so you can get that area nice and level before putting seed/sod down.

Having a relatively smooth base makes ALL the difference aesthetically, IMO.

Electric zero here.. night and day difference in time and noise between my neighbors and me.

If you're not removing the blade, then you aren't checking for balance after the shave.

Bad balance on a blade can lead to numerous issues.

If I can consistently pull 12k in realized gains a month for six months, then I'm out of the rat race.

To me, it's more about consistency of execution that will give me that confidence.

This is the endeavor of reviews.. can't bird box that ish, gotta tear it apart.

Surefire solution? Spray the area down after the fact; it dilutes the nitrogen.

There are supplements that you can give your dog but I would suggest not going this route, as any vet will tell you that altering their urine pH can lead to stones, UTIs, etc.

The more arduous path? Train them where to pee.

I, like others, treat the front yard as for show and the backyard for family.

I produce about 3TB of logs daily across systems, along with about 1TB of metrics from Prom alone, and I'm extremely selective with labels given the overhead.

High cardinality inherently suggests "this" topic isn't an ideal focus as a metric, but in some rare use cases, I'd push that over to Athena/Elasticache if need be. This avoids the brittleness introduced by logs.

I'm all for structured logs, just not as metrics.
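To illustrate why that selectivity matters: each unique combination of label values becomes its own time series, so series count multiplies across labels. A toy sketch, not tied to any particular metrics library, with made-up cardinalities:

```go
package main

import "fmt"

func main() {
	// Each distinct label-value combination creates a separate time series.
	// A few low-cardinality labels stay cheap...
	methods := 5  // e.g. GET, POST, PUT, PATCH, DELETE
	statuses := 6 // e.g. status-code buckets
	fmt.Println("series with method+status:", methods*statuses)

	// ...but one high-cardinality label (say, a user ID) multiplies
	// everything, which is why it belongs in logs or a query store
	// rather than in metric labels.
	userIDs := 100000
	fmt.Println("series with method+status+user_id:", methods*statuses*userIDs)
}
```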

Remember: You're in a software development sub, explaining how tooling works in an unsolicited manner as you did can come across very condescending :)